Jan 15 00:40:00.366708 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 22:02:13 -00 2026 Jan 15 00:40:00.366731 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:40:00.366971 kernel: BIOS-provided physical RAM map: Jan 15 00:40:00.366982 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 00:40:00.366988 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 00:40:00.366995 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 00:40:00.367002 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 00:40:00.367009 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 00:40:00.367153 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 00:40:00.367161 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 00:40:00.367168 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 15 00:40:00.367178 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 00:40:00.367184 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 00:40:00.367191 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 00:40:00.367198 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 00:40:00.367205 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 00:40:00.367285 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 00:40:00.367293 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 00:40:00.367300 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 00:40:00.367307 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 00:40:00.367313 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 00:40:00.367320 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 00:40:00.367327 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 00:40:00.367334 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 00:40:00.367340 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 00:40:00.367347 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 00:40:00.367357 kernel: NX (Execute Disable) protection: active Jan 15 00:40:00.367364 kernel: APIC: Static calls initialized Jan 15 00:40:00.367370 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 15 00:40:00.367377 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 15 00:40:00.367384 kernel: extended physical RAM map: Jan 15 00:40:00.367391 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 00:40:00.367398 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 00:40:00.367405 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 00:40:00.367417 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 00:40:00.367517 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 00:40:00.367528 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 00:40:00.367543 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 00:40:00.367555 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 15 00:40:00.367567 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 15 00:40:00.367582 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 15 00:40:00.367595 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 15 00:40:00.367605 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 15 00:40:00.367615 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 00:40:00.367624 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 00:40:00.367634 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 00:40:00.367644 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 00:40:00.367654 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 00:40:00.367667 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 00:40:00.367679 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 00:40:00.367691 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 00:40:00.367698 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 00:40:00.367705 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 00:40:00.367712 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 00:40:00.367719 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 00:40:00.367726 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 00:40:00.368009 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 00:40:00.368018 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 00:40:00.368096 kernel: efi: EFI v2.7 by EDK II Jan 15 00:40:00.368104 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 15 00:40:00.368177 kernel: random: crng init done Jan 15 00:40:00.368188 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 15 00:40:00.368262 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 15 00:40:00.368269 kernel: secureboot: Secure boot disabled Jan 15 00:40:00.368277 kernel: SMBIOS 2.8 present. 
Jan 15 00:40:00.368284 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 15 00:40:00.368291 kernel: DMI: Memory slots populated: 1/1 Jan 15 00:40:00.368298 kernel: Hypervisor detected: KVM Jan 15 00:40:00.368305 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 00:40:00.368312 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 00:40:00.368320 kernel: kvm-clock: using sched offset of 18363159453 cycles Jan 15 00:40:00.368328 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 00:40:00.368338 kernel: tsc: Detected 2445.424 MHz processor Jan 15 00:40:00.368346 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 00:40:00.368353 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 00:40:00.368361 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 00:40:00.368368 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 15 00:40:00.368376 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 00:40:00.368383 kernel: Using GB pages for direct mapping Jan 15 00:40:00.368393 kernel: ACPI: Early table checksum verification disabled Jan 15 00:40:00.368401 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 15 00:40:00.368408 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 15 00:40:00.368416 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:40:00.368423 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:40:00.368431 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 15 00:40:00.368438 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:40:00.368448 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:40:00.368456 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:40:00.368463 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 00:40:00.368470 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 15 00:40:00.368478 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 15 00:40:00.368485 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 15 00:40:00.368493 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 15 00:40:00.368503 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 15 00:40:00.368510 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 15 00:40:00.368517 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 15 00:40:00.368525 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 15 00:40:00.368532 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 15 00:40:00.368539 kernel: No NUMA configuration found Jan 15 00:40:00.368546 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 15 00:40:00.368558 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 15 00:40:00.368576 kernel: Zone ranges: Jan 15 00:40:00.368587 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 00:40:00.368597 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 15 00:40:00.368607 kernel: Normal empty Jan 15 00:40:00.368617 kernel: Device empty Jan 15 
00:40:00.368627 kernel: Movable zone start for each node Jan 15 00:40:00.368637 kernel: Early memory node ranges Jan 15 00:40:00.368651 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 15 00:40:00.368994 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 15 00:40:00.369005 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 15 00:40:00.369013 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 15 00:40:00.369020 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 15 00:40:00.369028 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 15 00:40:00.369035 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 15 00:40:00.369042 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 15 00:40:00.369127 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 15 00:40:00.369135 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 00:40:00.369152 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 15 00:40:00.369162 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 15 00:40:00.369170 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 00:40:00.369177 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 15 00:40:00.369185 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 15 00:40:00.369193 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 15 00:40:00.369200 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 15 00:40:00.369210 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 15 00:40:00.369218 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 00:40:00.369226 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 00:40:00.369234 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 15 00:40:00.369244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 00:40:00.369251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 00:40:00.369259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 00:40:00.369267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 00:40:00.369274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 00:40:00.369282 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 00:40:00.369289 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 15 00:40:00.369299 kernel: TSC deadline timer available Jan 15 00:40:00.369307 kernel: CPU topo: Max. logical packages: 1 Jan 15 00:40:00.369315 kernel: CPU topo: Max. logical dies: 1 Jan 15 00:40:00.369322 kernel: CPU topo: Max. dies per package: 1 Jan 15 00:40:00.369330 kernel: CPU topo: Max. threads per core: 1 Jan 15 00:40:00.369337 kernel: CPU topo: Num. cores per package: 4 Jan 15 00:40:00.369345 kernel: CPU topo: Num. 
threads per package: 4 Jan 15 00:40:00.369352 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 15 00:40:00.369362 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 00:40:00.369370 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 15 00:40:00.369378 kernel: kvm-guest: setup PV sched yield Jan 15 00:40:00.369385 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 15 00:40:00.369393 kernel: Booting paravirtualized kernel on KVM Jan 15 00:40:00.369401 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 00:40:00.369409 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 15 00:40:00.369419 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 15 00:40:00.369426 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 15 00:40:00.369434 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 15 00:40:00.369441 kernel: kvm-guest: PV spinlocks enabled Jan 15 00:40:00.369449 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 00:40:00.369531 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:40:00.369540 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 00:40:00.369551 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 00:40:00.369559 kernel: Fallback order for Node 0: 0 Jan 15 00:40:00.369567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 15 00:40:00.369575 kernel: Policy zone: DMA32 Jan 15 00:40:00.369582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 00:40:00.369590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 15 00:40:00.369597 kernel: ftrace: allocating 40097 entries in 157 pages Jan 15 00:40:00.369608 kernel: ftrace: allocated 157 pages with 5 groups Jan 15 00:40:00.369615 kernel: Dynamic Preempt: voluntary Jan 15 00:40:00.369623 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 00:40:00.369631 kernel: rcu: RCU event tracing is enabled. Jan 15 00:40:00.369639 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 15 00:40:00.369647 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 00:40:00.369654 kernel: Rude variant of Tasks RCU enabled. Jan 15 00:40:00.369662 kernel: Tracing variant of Tasks RCU enabled. Jan 15 00:40:00.369672 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 00:40:00.369679 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 15 00:40:00.369951 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:40:00.369966 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:40:00.369978 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 00:40:00.369989 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 15 00:40:00.369999 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 15 00:40:00.370015 kernel: Console: colour dummy device 80x25 Jan 15 00:40:00.370028 kernel: printk: legacy console [ttyS0] enabled Jan 15 00:40:00.370041 kernel: ACPI: Core revision 20240827 Jan 15 00:40:00.370052 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 15 00:40:00.370063 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 00:40:00.370074 kernel: x2apic enabled Jan 15 00:40:00.370084 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 00:40:00.370099 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 15 00:40:00.370110 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 15 00:40:00.370121 kernel: kvm-guest: setup PV IPIs Jan 15 00:40:00.370133 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 15 00:40:00.370146 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 15 00:40:00.370158 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 15 00:40:00.370166 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 00:40:00.370177 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 15 00:40:00.370185 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 15 00:40:00.370193 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 00:40:00.370200 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 00:40:00.370208 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 15 00:40:00.370216 kernel: Speculative Store Bypass: Vulnerable Jan 15 00:40:00.370223 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 15 00:40:00.370234 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 15 00:40:00.370323 kernel: active return thunk: srso_alias_return_thunk Jan 15 00:40:00.370332 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 15 00:40:00.370340 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 15 00:40:00.370348 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 15 00:40:00.370356 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 00:40:00.370363 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 00:40:00.370374 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 00:40:00.370382 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 00:40:00.370390 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 15 00:40:00.370398 kernel: Freeing SMP alternatives memory: 32K Jan 15 00:40:00.370405 kernel: pid_max: default: 32768 minimum: 301 Jan 15 00:40:00.370413 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 15 00:40:00.370421 kernel: landlock: Up and running. Jan 15 00:40:00.370431 kernel: SELinux: Initializing. 
Jan 15 00:40:00.370439 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 00:40:00.370446 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 00:40:00.370454 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 15 00:40:00.370462 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 15 00:40:00.370469 kernel: signal: max sigframe size: 1776 Jan 15 00:40:00.370477 kernel: rcu: Hierarchical SRCU implementation. Jan 15 00:40:00.370488 kernel: rcu: Max phase no-delay instances is 400. Jan 15 00:40:00.370496 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 15 00:40:00.370503 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 00:40:00.370511 kernel: smp: Bringing up secondary CPUs ... Jan 15 00:40:00.370519 kernel: smpboot: x86: Booting SMP configuration: Jan 15 00:40:00.370526 kernel: .... node #0, CPUs: #1 #2 #3 Jan 15 00:40:00.370534 kernel: smp: Brought up 1 node, 4 CPUs Jan 15 00:40:00.370544 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 15 00:40:00.370552 kernel: Memory: 2441096K/2565800K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15432K init, 2608K bss, 118764K reserved, 0K cma-reserved) Jan 15 00:40:00.370560 kernel: devtmpfs: initialized Jan 15 00:40:00.370568 kernel: x86/mm: Memory block size: 128MB Jan 15 00:40:00.370575 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 15 00:40:00.370583 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 15 00:40:00.370591 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 15 00:40:00.370601 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 15 00:40:00.370609 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 15 00:40:00.370617 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 15 00:40:00.370625 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 00:40:00.370633 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 15 00:40:00.370640 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 00:40:00.370648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 00:40:00.370658 kernel: audit: initializing netlink subsys (disabled) Jan 15 00:40:00.370665 kernel: audit: type=2000 audit(1768437587.948:1): state=initialized audit_enabled=0 res=1 Jan 15 00:40:00.370673 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 00:40:00.370680 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 00:40:00.370688 kernel: cpuidle: using governor menu Jan 15 00:40:00.370695 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 00:40:00.370703 kernel: dca service started, version 1.12.1 Jan 15 00:40:00.370713 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 15 00:40:00.370721 kernel: PCI: Using configuration type 1 for base access Jan 15 00:40:00.370729 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 00:40:00.370948 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 00:40:00.370957 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 00:40:00.370965 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 00:40:00.370972 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 00:40:00.370984 kernel: ACPI: Added _OSI(Module Device) Jan 15 00:40:00.370992 kernel: ACPI: Added _OSI(Processor Device) Jan 15 00:40:00.370999 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 00:40:00.371007 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 00:40:00.371015 kernel: ACPI: Interpreter enabled Jan 15 00:40:00.371022 kernel: ACPI: PM: (supports S0 S3 S5) Jan 15 00:40:00.371035 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 00:40:00.371052 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 00:40:00.371064 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 00:40:00.371075 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 00:40:00.371085 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 00:40:00.371403 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 00:40:00.371627 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 15 00:40:00.372105 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 15 00:40:00.372119 kernel: PCI host bridge to bus 0000:00 Jan 15 00:40:00.372377 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 00:40:00.372595 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 00:40:00.373037 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 00:40:00.373234 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 15 00:40:00.373476 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 15 00:40:00.373700 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 15 00:40:00.374136 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 00:40:00.374368 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 15 00:40:00.374633 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 15 00:40:00.375155 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 15 00:40:00.375366 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 15 00:40:00.375570 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 15 00:40:00.376078 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 00:40:00.376288 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 15625 usecs Jan 15 00:40:00.376502 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 15 00:40:00.376713 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 15 00:40:00.377212 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 15 00:40:00.377422 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 15 00:40:00.377637 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 00:40:00.378127 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 15 00:40:00.378356 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xc1042000-0xc1042fff] Jan 15 00:40:00.378570 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jan 15 00:40:00.379048 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 00:40:00.379315 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 15 00:40:00.379530 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 15 00:40:00.380020 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 15 00:40:00.380284 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 15 00:40:00.380544 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 15 00:40:00.380989 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 00:40:00.381200 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 14648 usecs Jan 15 00:40:00.381460 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 15 00:40:00.381697 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 15 00:40:00.382151 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 15 00:40:00.382366 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 15 00:40:00.382618 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 15 00:40:00.382636 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 00:40:00.382648 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 00:40:00.382658 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 00:40:00.382675 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 00:40:00.382688 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 00:40:00.382702 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 00:40:00.382713 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 00:40:00.382721 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 00:40:00.382728 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 00:40:00.382961 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 00:40:00.382974 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 00:40:00.382982 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 00:40:00.382990 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 00:40:00.382998 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 00:40:00.383006 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 00:40:00.383013 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 00:40:00.383021 kernel: iommu: Default domain type: Translated Jan 15 00:40:00.383031 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 00:40:00.383039 kernel: efivars: Registered efivars operations Jan 15 00:40:00.383047 kernel: PCI: Using ACPI for IRQ routing Jan 15 00:40:00.383055 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 00:40:00.383063 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 15 00:40:00.383071 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 15 00:40:00.383078 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 15 00:40:00.383089 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 15 00:40:00.383096 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 15 00:40:00.383104 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 15 
00:40:00.383112 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 15 00:40:00.383119 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 15 00:40:00.383334 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 15 00:40:00.383538 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 00:40:00.384050 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 00:40:00.384065 kernel: vgaarb: loaded Jan 15 00:40:00.384073 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 15 00:40:00.384081 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 15 00:40:00.384089 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 00:40:00.384097 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 00:40:00.384104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 00:40:00.384117 kernel: pnp: PnP ACPI init Jan 15 00:40:00.384339 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 15 00:40:00.384352 kernel: pnp: PnP ACPI: found 6 devices Jan 15 00:40:00.384360 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 00:40:00.384368 kernel: NET: Registered PF_INET protocol family Jan 15 00:40:00.384376 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 00:40:00.384384 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 00:40:00.384413 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 00:40:00.384424 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 00:40:00.384432 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 00:40:00.384440 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 00:40:00.384448 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 00:40:00.384456 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 00:40:00.384466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 00:40:00.384474 kernel: NET: Registered PF_XDP protocol family Jan 15 00:40:00.384678 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 15 00:40:00.385188 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 15 00:40:00.385404 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 00:40:00.385597 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 00:40:00.386159 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 00:40:00.386366 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 15 00:40:00.386557 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 15 00:40:00.386995 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 15 00:40:00.387014 kernel: PCI: CLS 0 bytes, default 64 Jan 15 00:40:00.387030 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 15 00:40:00.387043 kernel: Initialise system trusted keyrings Jan 15 00:40:00.387061 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 00:40:00.387072 kernel: Key type asymmetric registered Jan 15 00:40:00.387083 kernel: Asymmetric key parser 'x509' registered Jan 15 00:40:00.387094 kernel: Block layer SCSI generic (bsg) driver version 
0.4 loaded (major 250) Jan 15 00:40:00.387105 kernel: io scheduler mq-deadline registered Jan 15 00:40:00.387116 kernel: io scheduler kyber registered Jan 15 00:40:00.387127 kernel: io scheduler bfq registered Jan 15 00:40:00.387141 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 00:40:00.387161 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 00:40:00.387176 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 00:40:00.387187 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 00:40:00.387198 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 00:40:00.387213 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 00:40:00.387224 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 00:40:00.387236 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 00:40:00.387247 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 00:40:00.387261 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 00:40:00.387491 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 15 00:40:00.389050 kernel: rtc_cmos 00:04: registered as rtc0 Jan 15 00:40:00.389328 kernel: rtc_cmos 00:04: setting system clock to 2026-01-15T00:39:56 UTC (1768437596) Jan 15 00:40:00.389599 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 15 00:40:00.389616 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 15 00:40:00.389628 kernel: efifb: probing for efifb Jan 15 00:40:00.389639 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 15 00:40:00.389651 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 15 00:40:00.389671 kernel: efifb: scrolling: redraw Jan 15 00:40:00.389683 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 00:40:00.389694 kernel: Console: switching to colour frame buffer device 160x50 Jan 15 00:40:00.389705 kernel: fb0: EFI VGA frame buffer device Jan 15 00:40:00.389717 kernel: pstore: Using crash dump compression: deflate Jan 15 00:40:00.389728 kernel: pstore: Registered efi_pstore as persistent store backend Jan 15 00:40:00.389997 kernel: NET: Registered PF_INET6 protocol family Jan 15 00:40:00.390019 kernel: Segment Routing with IPv6 Jan 15 00:40:00.390036 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 00:40:00.390045 kernel: NET: Registered PF_PACKET protocol family Jan 15 00:40:00.390054 kernel: Key type dns_resolver registered Jan 15 00:40:00.390062 kernel: IPI shorthand broadcast: enabled Jan 15 00:40:00.390070 kernel: sched_clock: Marking stable (5555073514, 4221552768)->(11294500910, -1517874628) Jan 15 00:40:00.390078 kernel: registered taskstats version 1 Jan 15 00:40:00.390087 kernel: Loading compiled-in X.509 certificates Jan 15 00:40:00.390098 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e8b6753a1cbf8103f5806ce5d59781743c62fae9' Jan 15 00:40:00.390107 kernel: Demotion targets for Node 0: null Jan 15 00:40:00.390115 kernel: Key type .fscrypt registered Jan 15 00:40:00.390123 kernel: Key type fscrypt-provisioning registered Jan 15 00:40:00.390131 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 00:40:00.390139 kernel: ima: Allocated hash algorithm: sha1 Jan 15 00:40:00.390148 kernel: ima: No architecture policies found Jan 15 00:40:00.390158 kernel: clk: Disabling unused clocks Jan 15 00:40:00.390166 kernel: Freeing unused kernel image (initmem) memory: 15432K Jan 15 00:40:00.390175 kernel: Write protecting the kernel read-only data: 45056k Jan 15 00:40:00.390183 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K Jan 15 00:40:00.390191 kernel: Run /init as init process Jan 15 00:40:00.390200 kernel: with arguments: Jan 15 00:40:00.390208 kernel: /init Jan 15 00:40:00.390219 kernel: with environment: Jan 15 00:40:00.390227 kernel: HOME=/ Jan 15 00:40:00.390235 kernel: TERM=linux Jan 15 00:40:00.390243 kernel: SCSI subsystem initialized Jan 15 00:40:00.390251 kernel: libata version 3.00 loaded. Jan 15 00:40:00.390492 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 00:40:00.390510 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 00:40:00.390999 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 15 00:40:00.391214 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 15 00:40:00.391420 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 00:40:00.391719 kernel: scsi host0: ahci Jan 15 00:40:00.392233 kernel: scsi host1: ahci Jan 15 00:40:00.392477 kernel: scsi host2: ahci Jan 15 00:40:00.393162 kernel: scsi host3: ahci Jan 15 00:40:00.393408 kernel: scsi host4: ahci Jan 15 00:40:00.393698 kernel: scsi host5: ahci Jan 15 00:40:00.393714 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 15 00:40:00.393723 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 15 00:40:00.393731 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 15 00:40:00.394020 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 15 00:40:00.394029 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 15 00:40:00.394037 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 15 00:40:00.394045 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 00:40:00.394053 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 00:40:00.394061 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 00:40:00.394069 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 00:40:00.394080 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 00:40:00.394088 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 15 00:40:00.394096 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 00:40:00.394104 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 15 00:40:00.394113 kernel: ata3.00: applying bridge limits Jan 15 00:40:00.394121 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 00:40:00.394129 kernel: ata3.00: configured for UDMA/100 Jan 15 00:40:00.394390 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 15 00:40:00.394677 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 15 00:40:00.395202 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 15 00:40:00.395445 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 15 00:40:00.395474 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 15 00:40:00.395490 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 00:40:00.395501 kernel: GPT:16515071 != 27000831 Jan 15 00:40:00.395513 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 00:40:00.395523 kernel: GPT:16515071 != 27000831 Jan 15 00:40:00.395534 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 00:40:00.395545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 00:40:00.396131 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 15 00:40:00.396151 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 00:40:00.396160 kernel: device-mapper: uevent: version 1.0.3 Jan 15 00:40:00.396169 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 00:40:00.396177 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 15 00:40:00.396185 kernel: raid6: avx2x4 gen() 28775 MB/s Jan 15 00:40:00.396193 kernel: raid6: avx2x2 gen() 28703 MB/s Jan 15 00:40:00.396201 kernel: raid6: avx2x1 gen() 21660 MB/s Jan 15 00:40:00.396210 kernel: raid6: using algorithm avx2x4 gen() 28775 MB/s Jan 15 00:40:00.396220 kernel: raid6: .... xor() 3887 MB/s, rmw enabled Jan 15 00:40:00.396229 kernel: raid6: using avx2x2 recovery algorithm Jan 15 00:40:00.396237 kernel: xor: automatically using best checksumming function avx Jan 15 00:40:00.396245 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 00:40:00.396253 kernel: BTRFS: device fsid 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 15 00:40:00.396261 kernel: BTRFS info (device dm-0): first mount of filesystem 1fc5e5ba-2a81-4f9e-b722-a47a3e33c106 Jan 15 00:40:00.396269 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:40:00.396280 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 00:40:00.396289 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 00:40:00.396297 kernel: loop: module loaded Jan 15 00:40:00.396305 kernel: loop0: detected capacity change from 0 to 100160 Jan 15 00:40:00.396313 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 00:40:00.396322 systemd[1]: Successfully made /usr/ read-only. Jan 15 00:40:00.396338 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:40:00.396353 systemd[1]: Detected virtualization kvm. Jan 15 00:40:00.396367 systemd[1]: Detected architecture x86-64. Jan 15 00:40:00.396379 systemd[1]: Running in initrd. Jan 15 00:40:00.396391 systemd[1]: No hostname configured, using default hostname. Jan 15 00:40:00.396403 systemd[1]: Hostname set to . Jan 15 00:40:00.396419 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:40:00.396431 systemd[1]: Queued start job for default target initrd.target. Jan 15 00:40:00.396442 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:40:00.396454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:40:00.396467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 15 00:40:00.396482 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 00:40:00.396496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:40:00.396513 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 00:40:00.396526 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 00:40:00.396538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:40:00.396549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:40:00.396561 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 00:40:00.396577 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:40:00.396591 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:40:00.396605 systemd[1]: Reached target swap.target - Swaps. Jan 15 00:40:00.396616 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:40:00.396629 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:40:00.396640 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:40:00.396652 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:40:00.396668 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 00:40:00.396680 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 15 00:40:00.396695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:40:00.396708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:40:00.396720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:40:00.396732 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:40:00.397010 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 00:40:00.397032 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 00:40:00.397045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:40:00.397057 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 00:40:00.397069 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 15 00:40:00.397080 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 00:40:00.397092 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:40:00.397108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:40:00.397125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:40:00.397179 systemd-journald[319]: Collecting audit messages is enabled. Jan 15 00:40:00.397215 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 00:40:00.397229 kernel: audit: type=1130 audit(1768437600.386:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.397243 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 15 00:40:00.397257 systemd-journald[319]: Journal started Jan 15 00:40:00.397287 systemd-journald[319]: Runtime Journal (/run/log/journal/a0ab7c61900f45ac9bf98dc5c499f485) is 6M, max 48.1M, 42M free. Jan 15 00:40:00.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.451949 kernel: audit: type=1130 audit(1768437600.428:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.451977 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:40:00.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.472122 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 00:40:00.502705 kernel: audit: type=1130 audit(1768437600.470:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.515691 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 00:40:00.547090 kernel: audit: type=1130 audit(1768437600.511:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.556049 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 00:40:00.613083 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 00:40:00.625197 kernel: Bridge firewalling registered Jan 15 00:40:00.624316 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 15 00:40:00.635610 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 15 00:40:00.654271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:40:00.691476 kernel: audit: type=1130 audit(1768437600.666:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.688256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:40:00.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:00.739116 kernel: audit: type=1130 audit(1768437600.700:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.739261 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:40:00.785978 kernel: audit: type=1130 audit(1768437600.747:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.786138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:40:00.829372 kernel: audit: type=1130 audit(1768437600.794:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.801295 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 00:40:00.855178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:40:00.865109 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:40:00.913214 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:40:00.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.953996 kernel: audit: type=1130 audit(1768437600.926:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.959297 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:40:00.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:00.988699 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 00:40:01.020574 kernel: audit: type=1130 audit(1768437600.980:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:01.021444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:40:01.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:01.044000 audit: BPF prog-id=6 op=LOAD Jan 15 00:40:01.046648 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 15 00:40:01.079447 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1042e64ca7212ba2a277cb872bdf1dc4e195c9fb8110078c443b3efbd2488cb9 Jan 15 00:40:01.186039 systemd-resolved[362]: Positive Trust Anchors: Jan 15 00:40:01.186141 systemd-resolved[362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:40:01.186146 systemd-resolved[362]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:40:01.186174 systemd-resolved[362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:40:01.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:01.208130 systemd-resolved[362]: Defaulting to hostname 'linux'. Jan 15 00:40:01.211432 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:40:01.222220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:40:01.540980 kernel: Loading iSCSI transport class v2.0-870. Jan 15 00:40:01.570203 kernel: iscsi: registered transport (tcp) Jan 15 00:40:01.608076 kernel: iscsi: registered transport (qla4xxx) Jan 15 00:40:01.608116 kernel: QLogic iSCSI HBA Driver Jan 15 00:40:01.681674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:40:01.733354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:40:01.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:01.745297 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:40:01.921648 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 00:40:01.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:01.946495 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 00:40:01.953030 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 00:40:02.071438 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:40:02.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:02.079000 audit: BPF prog-id=7 op=LOAD Jan 15 00:40:02.080000 audit: BPF prog-id=8 op=LOAD Jan 15 00:40:02.082048 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:40:02.157505 systemd-udevd[589]: Using default interface naming scheme 'v257'. Jan 15 00:40:02.175298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:40:02.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:02.200569 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 00:40:02.271689 dracut-pre-trigger[627]: rd.md=0: removing MD RAID activation Jan 15 00:40:02.359245 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 00:40:02.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:02.380002 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 00:40:02.409651 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 00:40:02.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:02.412000 audit: BPF prog-id=9 op=LOAD Jan 15 00:40:02.415958 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 00:40:02.526691 systemd-networkd[734]: lo: Link UP Jan 15 00:40:02.526986 systemd-networkd[734]: lo: Gained carrier Jan 15 00:40:02.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:02.528996 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 00:40:02.543334 systemd[1]: Reached target network.target - Network. Jan 15 00:40:02.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:02.562124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:40:02.592114 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 00:40:02.694319 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 00:40:02.737423 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 00:40:02.786531 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 00:40:02.812102 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 00:40:02.817592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 00:40:02.842597 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 00:40:02.876484 kernel: AES CTR mode by8 optimization enabled Jan 15 00:40:02.913487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 15 00:40:02.921149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:40:02.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:02.935200 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:40:02.956145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:40:02.983087 disk-uuid[800]: Primary Header is updated. Jan 15 00:40:02.983087 disk-uuid[800]: Secondary Entries is updated. Jan 15 00:40:02.983087 disk-uuid[800]: Secondary Header is updated. Jan 15 00:40:03.013366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:40:03.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:03.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:03.013501 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:40:03.036181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:40:03.119589 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:40:03.145151 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 15 00:40:03.119604 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 00:40:03.127455 systemd-networkd[734]: eth0: Link UP Jan 15 00:40:03.128098 systemd-networkd[734]: eth0: Gained carrier Jan 15 00:40:03.128114 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:40:03.205451 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 00:40:03.228371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:40:03.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:03.284474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 00:40:03.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:03.296330 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 00:40:03.309431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:40:03.329390 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 00:40:03.350431 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 00:40:03.426524 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 15 00:40:03.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:04.121440 disk-uuid[819]: Warning: The kernel is still using the old partition table. Jan 15 00:40:04.121440 disk-uuid[819]: The new table will be used at the next reboot or after you Jan 15 00:40:04.121440 disk-uuid[819]: run partprobe(8) or kpartx(8) Jan 15 00:40:04.121440 disk-uuid[819]: The operation has completed successfully. Jan 15 00:40:04.176572 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 00:40:04.177203 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 00:40:04.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:04.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:04.198028 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 00:40:04.298114 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (873) Jan 15 00:40:04.316085 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:40:04.316121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:40:04.337473 systemd-networkd[734]: eth0: Gained IPv6LL Jan 15 00:40:04.350210 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:40:04.350227 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:40:04.370957 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:40:04.379378 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 00:40:04.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:04.388643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 15 00:40:04.595255 ignition[892]: Ignition 2.22.0 Jan 15 00:40:04.595351 ignition[892]: Stage: fetch-offline Jan 15 00:40:04.595404 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:40:04.595417 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:40:04.595516 ignition[892]: parsed url from cmdline: "" Jan 15 00:40:04.595520 ignition[892]: no config URL provided Jan 15 00:40:04.595526 ignition[892]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 00:40:04.595538 ignition[892]: no config at "/usr/lib/ignition/user.ign" Jan 15 00:40:04.595583 ignition[892]: op(1): [started] loading QEMU firmware config module Jan 15 00:40:04.595589 ignition[892]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 15 00:40:04.658471 ignition[892]: op(1): [finished] loading QEMU firmware config module Jan 15 00:40:05.537359 ignition[892]: parsing config with SHA512: a776f4b431fe0ad72057de261167d9a4a8ba8910085d16a8257ba04c10fca22eed06caef884934c5870014973ddc30ff91ce9c552babc2fa5e8997167adaca56 Jan 15 00:40:05.573509 unknown[892]: fetched base config from "system" Jan 15 00:40:05.574546 unknown[892]: fetched user config from "qemu" Jan 15 00:40:05.575375 ignition[892]: fetch-offline: fetch-offline passed Jan 15 00:40:05.575468 ignition[892]: Ignition finished successfully Jan 15 00:40:05.607055 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 00:40:05.654594 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 15 00:40:05.654625 kernel: audit: type=1130 audit(1768437605.614:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:05.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:05.616274 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 15 00:40:05.617629 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 00:40:05.771385 ignition[904]: Ignition 2.22.0 Jan 15 00:40:05.771477 ignition[904]: Stage: kargs Jan 15 00:40:05.771624 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:40:05.771635 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:40:05.775629 ignition[904]: kargs: kargs passed Jan 15 00:40:05.775682 ignition[904]: Ignition finished successfully Jan 15 00:40:05.821228 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 00:40:05.832266 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 00:40:05.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:05.883205 kernel: audit: type=1130 audit(1768437605.828:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:05.970395 ignition[912]: Ignition 2.22.0 Jan 15 00:40:05.970409 ignition[912]: Stage: disks Jan 15 00:40:05.970559 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 15 00:40:05.970570 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:40:05.971530 ignition[912]: disks: disks passed Jan 15 00:40:05.971580 ignition[912]: Ignition finished successfully Jan 15 00:40:06.022483 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 00:40:06.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:06.042349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 00:40:06.073314 kernel: audit: type=1130 audit(1768437606.041:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:06.068211 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 00:40:06.091419 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 00:40:06.107510 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:40:06.124246 systemd[1]: Reached target basic.target - Basic System. Jan 15 00:40:06.142703 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 00:40:06.239454 systemd-fsck[922]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 15 00:40:06.246503 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 00:40:06.287390 kernel: audit: type=1130 audit(1768437606.258:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:06.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:06.262474 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 00:40:06.627283 kernel: EXT4-fs (vda9): mounted filesystem 6f459a58-5046-4124-bfbc-09321f1e67d8 r/w with ordered data mode. Quota mode: none. Jan 15 00:40:06.629240 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 00:40:06.638049 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 00:40:06.655060 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 00:40:06.704416 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 00:40:06.712369 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 00:40:06.753698 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (930) Jan 15 00:40:06.712427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 15 00:40:06.790247 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:40:06.790277 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:40:06.712455 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 00:40:06.813154 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:40:06.813178 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:40:06.815249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 00:40:06.827308 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 00:40:06.839473 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 00:40:06.965439 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 00:40:06.978368 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Jan 15 00:40:07.003057 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 00:40:07.025442 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 00:40:07.339421 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 00:40:07.379693 kernel: audit: type=1130 audit(1768437607.345:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:07.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:07.349016 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 00:40:07.405360 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 00:40:07.432471 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 00:40:07.451988 kernel: BTRFS info (device vda6): last unmount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:40:07.492117 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 00:40:07.529614 kernel: audit: type=1130 audit(1768437607.500:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:07.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:07.586076 ignition[1045]: INFO : Ignition 2.22.0 Jan 15 00:40:07.586076 ignition[1045]: INFO : Stage: mount Jan 15 00:40:07.586076 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:40:07.586076 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:40:07.623083 ignition[1045]: INFO : mount: mount passed Jan 15 00:40:07.623083 ignition[1045]: INFO : Ignition finished successfully Jan 15 00:40:07.642074 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 00:40:07.683366 kernel: audit: type=1130 audit(1768437607.649:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:07.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:07.653177 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 00:40:07.735521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 00:40:07.788397 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1057) Jan 15 00:40:07.788449 kernel: BTRFS info (device vda6): first mount of filesystem 372d586b-dfcb-4c9b-8d15-cc0618567790 Jan 15 00:40:07.806070 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 00:40:07.833426 kernel: BTRFS info (device vda6): turning on async discard Jan 15 00:40:07.833458 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 00:40:07.837166 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 00:40:07.971628 ignition[1074]: INFO : Ignition 2.22.0 Jan 15 00:40:07.971628 ignition[1074]: INFO : Stage: files Jan 15 00:40:07.987435 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:40:07.987435 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:40:08.010230 ignition[1074]: DEBUG : files: compiled without relabeling support, skipping Jan 15 00:40:08.023470 ignition[1074]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 00:40:08.023470 ignition[1074]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 00:40:08.064461 ignition[1074]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 00:40:08.078173 ignition[1074]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 00:40:08.090370 ignition[1074]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 00:40:08.084981 unknown[1074]: wrote ssh authorized keys file for user: core Jan 15 00:40:08.129691 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:40:08.129691 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 15 00:40:08.221124 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 00:40:08.322580 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 00:40:08.322580 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing 
file "/sysroot/home/core/nfs-pod.yaml" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:40:08.356046 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 15 00:40:08.829162 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 00:40:10.769708 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 00:40:10.769708 ignition[1074]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 00:40:10.802727 ignition[1074]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:40:10.822672 ignition[1074]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 00:40:10.822672 ignition[1074]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 00:40:10.822672 ignition[1074]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 15 00:40:10.822672 ignition[1074]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 00:40:10.822672 ignition[1074]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 00:40:10.822672 ignition[1074]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 15 00:40:10.822672 ignition[1074]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: op(11): [started] setting preset to enabled for 
"prepare-helm.service" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 00:40:10.930192 ignition[1074]: INFO : files: files passed Jan 15 00:40:10.930192 ignition[1074]: INFO : Ignition finished successfully Jan 15 00:40:11.102478 kernel: audit: type=1130 audit(1768437610.965:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:10.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:10.947331 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 00:40:10.970066 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 00:40:11.126581 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 00:40:11.139257 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 00:40:11.139473 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 00:40:11.223461 kernel: audit: type=1130 audit(1768437611.172:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.223498 kernel: audit: type=1131 audit(1768437611.172:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.223589 initrd-setup-root-after-ignition[1104]: grep: /sysroot/oem/oem-release: No such file or directory Jan 15 00:40:11.244544 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:40:11.244544 initrd-setup-root-after-ignition[1107]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:40:11.276193 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 00:40:11.297196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:40:11.359732 kernel: audit: type=1130 audit(1768437611.311:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:11.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.312060 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 00:40:11.360382 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 00:40:11.505438 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 00:40:11.506168 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 00:40:11.583098 kernel: audit: type=1130 audit(1768437611.521:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.583156 kernel: audit: type=1131 audit(1768437611.521:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.524286 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 00:40:11.592579 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 00:40:11.613647 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 00:40:11.616026 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 00:40:11.703551 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:40:11.766097 kernel: audit: type=1130 audit(1768437611.710:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.713435 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 00:40:11.804067 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 00:40:11.804305 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:40:11.812213 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:40:11.831459 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 00:40:11.918138 kernel: audit: type=1131 audit(1768437611.882:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:11.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:11.858622 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 00:40:11.858991 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 00:40:11.916702 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 00:40:11.926224 systemd[1]: Stopped target basic.target - Basic System. Jan 15 00:40:11.945421 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 00:40:11.967469 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 00:40:11.984222 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 00:40:12.013166 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 00:40:12.034161 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 00:40:12.057655 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 00:40:12.079700 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 00:40:12.102744 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 00:40:12.203556 kernel: audit: type=1131 audit(1768437612.163:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.124114 systemd[1]: Stopped target swap.target - Swaps. Jan 15 00:40:12.150555 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 00:40:12.150697 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 00:40:12.204077 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:40:12.218292 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:40:12.236496 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 00:40:12.279693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:40:12.304096 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 00:40:12.304337 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 00:40:12.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.335101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 00:40:12.335406 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 00:40:12.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.374102 systemd[1]: Stopped target paths.target - Path Units. Jan 15 00:40:12.397114 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 00:40:12.418027 kernel: audit: type=1131 audit(1768437612.320:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:12.422272 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:40:12.429588 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 00:40:12.450747 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 00:40:12.470972 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 00:40:12.471083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 00:40:12.487237 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 00:40:12.487648 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 00:40:12.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.503664 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 15 00:40:12.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.503750 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:40:12.521680 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 00:40:12.522177 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 00:40:12.540633 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 00:40:12.540745 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 00:40:12.562680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 00:40:12.631261 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 00:40:12.645100 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 00:40:12.651044 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:40:12.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.670366 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 00:40:12.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.696554 ignition[1131]: INFO : Ignition 2.22.0 Jan 15 00:40:12.696554 ignition[1131]: INFO : Stage: umount Jan 15 00:40:12.696554 ignition[1131]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 00:40:12.696554 ignition[1131]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 00:40:12.696554 ignition[1131]: INFO : umount: umount passed Jan 15 00:40:12.696554 ignition[1131]: INFO : Ignition finished successfully Jan 15 00:40:12.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:12.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.670532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:40:12.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.687132 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 00:40:12.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.687292 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 00:40:12.715577 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 00:40:12.716001 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 00:40:12.728353 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 00:40:12.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:12.728571 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 00:40:12.745187 systemd[1]: Stopped target network.target - Network. Jan 15 00:40:12.760169 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 00:40:12.760293 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 00:40:12.780604 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 00:40:12.780697 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 00:40:12.797216 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 00:40:12.797285 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 00:40:12.815176 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 00:40:12.815236 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 00:40:12.830386 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 00:40:12.849635 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 00:40:12.884279 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 00:40:13.026000 audit: BPF prog-id=6 op=UNLOAD Jan 15 00:40:12.884528 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 00:40:13.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 15 00:40:12.966334 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 00:40:13.028199 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 00:40:13.075365 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 00:40:13.074000 audit: BPF prog-id=9 op=UNLOAD Jan 15 00:40:13.095962 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 00:40:13.096110 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:40:13.113694 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 00:40:13.132492 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 00:40:13.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.132597 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 00:40:13.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.150202 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 00:40:13.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.150285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:40:13.171267 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 00:40:13.171360 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 00:40:13.192272 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:40:13.210117 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 00:40:13.250275 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 00:40:13.273389 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 00:40:13.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.285005 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 00:40:13.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.285090 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 00:40:13.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.301471 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 00:40:13.301701 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:40:13.334025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 00:40:13.334139 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 15 00:40:13.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.349529 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 00:40:13.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.349581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:40:13.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.370185 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 00:40:13.370273 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 00:40:13.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.393412 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 00:40:13.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.393476 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 00:40:13.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.411240 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 00:40:13.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.411308 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 00:40:13.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.430205 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 00:40:13.442737 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 00:40:13.442976 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:40:13.461757 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 00:40:13.461989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:40:13.479587 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 00:40:13.479643 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:40:13.500074 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 00:40:13.500139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 15 00:40:13.521310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:40:13.521379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:40:13.714224 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 00:40:13.727609 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 00:40:13.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.756424 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 00:40:13.756980 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 00:40:13.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:13.774635 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 00:40:13.795585 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 00:40:13.851007 systemd[1]: Switching root. Jan 15 00:40:13.911225 systemd-journald[319]: Journal stopped Jan 15 00:40:17.328296 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). Jan 15 00:40:17.328408 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 00:40:17.328432 kernel: SELinux: policy capability open_perms=1 Jan 15 00:40:17.328461 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 00:40:17.328486 kernel: SELinux: policy capability always_check_network=0 Jan 15 00:40:17.328513 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 00:40:17.328530 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 00:40:17.328550 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 00:40:17.328575 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 00:40:17.328593 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 00:40:17.328614 systemd[1]: Successfully loaded SELinux policy in 131.105ms. Jan 15 00:40:17.328643 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.351ms. Jan 15 00:40:17.328667 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 00:40:17.328687 systemd[1]: Detected virtualization kvm. Jan 15 00:40:17.328708 systemd[1]: Detected architecture x86-64. Jan 15 00:40:17.328734 systemd[1]: Detected first boot. Jan 15 00:40:17.328758 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 00:40:17.328973 zram_generator::config[1175]: No configuration found. 
Jan 15 00:40:17.329008 kernel: Guest personality initialized and is inactive Jan 15 00:40:17.329029 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 15 00:40:17.329047 kernel: Initialized host personality Jan 15 00:40:17.329068 kernel: NET: Registered PF_VSOCK protocol family Jan 15 00:40:17.329091 systemd[1]: Populated /etc with preset unit settings. Jan 15 00:40:17.329120 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 00:40:17.329141 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 00:40:17.329164 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 00:40:17.329193 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 00:40:17.329215 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 00:40:17.329235 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 00:40:17.329262 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 00:40:17.329294 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 00:40:17.329321 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 00:40:17.329346 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 00:40:17.329366 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 00:40:17.329387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 00:40:17.329409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 00:40:17.329428 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 00:40:17.329453 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 00:40:17.329473 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 00:40:17.329501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 00:40:17.329527 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 00:40:17.329548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 00:40:17.329569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 00:40:17.329591 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 00:40:17.329611 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 00:40:17.329639 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 00:40:17.329658 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 00:40:17.329680 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 00:40:17.329700 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 00:40:17.329721 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 15 00:40:17.329740 systemd[1]: Reached target slices.target - Slice Units. Jan 15 00:40:17.329762 systemd[1]: Reached target swap.target - Swaps. Jan 15 00:40:17.329975 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 15 00:40:17.330006 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 00:40:17.330028 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 00:40:17.330047 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 00:40:17.330069 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 15 00:40:17.330088 kernel: audit: type=1335 audit(1768437616.294:102): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 00:40:17.330113 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 15 00:40:17.330138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 00:40:17.330162 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 15 00:40:17.330181 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 15 00:40:17.330202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 00:40:17.330223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 00:40:17.330244 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 00:40:17.330265 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 00:40:17.330290 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 00:40:17.330312 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 00:40:17.330331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:17.330354 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 00:40:17.330373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 00:40:17.330394 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 00:40:17.330415 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 00:40:17.330445 systemd[1]: Reached target machines.target - Containers. Jan 15 00:40:17.330465 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 00:40:17.330487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:40:17.330509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 00:40:17.330529 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 00:40:17.330550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:40:17.330573 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:40:17.330600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:40:17.330619 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 00:40:17.330641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:40:17.330660 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 15 00:40:17.330681 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 00:40:17.330702 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 00:40:17.330723 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 00:40:17.330748 kernel: audit: type=1131 audit(1768437617.006:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.330971 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 00:40:17.331001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:40:17.331023 kernel: audit: type=1131 audit(1768437617.053:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.331042 kernel: audit: type=1334 audit(1768437617.076:105): prog-id=14 op=UNLOAD Jan 15 00:40:17.331062 kernel: audit: type=1334 audit(1768437617.076:106): prog-id=13 op=UNLOAD Jan 15 00:40:17.331087 kernel: audit: type=1334 audit(1768437617.099:107): prog-id=15 op=LOAD Jan 15 00:40:17.331115 kernel: audit: type=1334 audit(1768437617.120:108): prog-id=16 op=LOAD Jan 15 00:40:17.331135 kernel: audit: type=1334 audit(1768437617.140:109): prog-id=17 op=LOAD Jan 15 00:40:17.331153 kernel: ACPI: bus type drm_connector registered Jan 15 00:40:17.331175 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 00:40:17.331194 kernel: fuse: init (API version 7.41) Jan 15 00:40:17.331214 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 00:40:17.331242 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 00:40:17.331266 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 00:40:17.331285 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 00:40:17.331339 systemd-journald[1261]: Collecting audit messages is enabled. Jan 15 00:40:17.331382 kernel: audit: type=1305 audit(1768437617.324:110): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 00:40:17.331405 systemd-journald[1261]: Journal started Jan 15 00:40:17.331437 systemd-journald[1261]: Runtime Journal (/run/log/journal/a0ab7c61900f45ac9bf98dc5c499f485) is 6M, max 48.1M, 42M free. Jan 15 00:40:16.294000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 00:40:17.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:17.076000 audit: BPF prog-id=14 op=UNLOAD Jan 15 00:40:17.076000 audit: BPF prog-id=13 op=UNLOAD Jan 15 00:40:17.099000 audit: BPF prog-id=15 op=LOAD Jan 15 00:40:17.120000 audit: BPF prog-id=16 op=LOAD Jan 15 00:40:17.140000 audit: BPF prog-id=17 op=LOAD Jan 15 00:40:17.324000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 00:40:15.499349 systemd[1]: Queued start job for default target multi-user.target. Jan 15 00:40:15.530724 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 00:40:15.532314 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 00:40:15.533520 systemd[1]: systemd-journald.service: Consumed 4.886s CPU time. Jan 15 00:40:17.324000 audit[1261]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff257bb9b0 a2=4000 a3=0 items=0 ppid=1 pid=1261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:17.358105 kernel: audit: type=1300 audit(1768437617.324:110): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff257bb9b0 a2=4000 a3=0 items=0 ppid=1 pid=1261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:17.324000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 15 00:40:17.391076 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 00:40:17.434230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:17.450361 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 00:40:17.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.461125 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 00:40:17.472367 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 00:40:17.485200 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 00:40:17.496340 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 00:40:17.507762 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 00:40:17.520300 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 00:40:17.530713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 00:40:17.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.545459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 00:40:17.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.559751 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 15 00:40:17.560686 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 00:40:17.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.574539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:40:17.575292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:40:17.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.587587 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 00:40:17.588429 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:40:17.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.600637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:40:17.601314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:40:17.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.614151 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 00:40:17.615403 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 00:40:17.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.628133 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:40:17.628437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 15 00:40:17.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.640361 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 00:40:17.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.654155 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 00:40:17.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.669098 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 00:40:17.683185 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 00:40:17.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.698247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 00:40:17.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.726190 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 00:40:17.738617 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 15 00:40:17.754581 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 00:40:17.767978 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 00:40:17.779154 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 00:40:17.779298 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 00:40:17.791344 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 00:40:17.805468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:40:17.805760 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:40:17.808645 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 15 00:40:17.823024 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 00:40:17.832747 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:40:17.835368 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 00:40:17.849250 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:40:17.852000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 00:40:17.868092 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 00:40:17.887432 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 00:40:17.898174 systemd-journald[1261]: Time spent on flushing to /var/log/journal/a0ab7c61900f45ac9bf98dc5c499f485 is 50.934ms for 1212 entries. Jan 15 00:40:17.898174 systemd-journald[1261]: System Journal (/var/log/journal/a0ab7c61900f45ac9bf98dc5c499f485) is 8M, max 163.5M, 155.5M free. Jan 15 00:40:17.998654 systemd-journald[1261]: Received client request to flush runtime journal. Jan 15 00:40:17.998743 kernel: loop1: detected capacity change from 0 to 111544 Jan 15 00:40:17.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:17.920027 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 00:40:17.931216 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 00:40:17.952399 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 00:40:17.969579 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 00:40:17.992598 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 00:40:18.005198 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 00:40:18.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:18.054322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 00:40:18.076440 kernel: loop2: detected capacity change from 0 to 119256 Jan 15 00:40:18.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:18.070462 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 15 00:40:18.070480 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 15 00:40:18.092510 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 00:40:18.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:18.111452 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 00:40:18.126635 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 00:40:18.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:18.156121 kernel: loop3: detected capacity change from 0 to 224512 Jan 15 00:40:18.213029 kernel: loop4: detected capacity change from 0 to 111544 Jan 15 00:40:18.222147 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 00:40:18.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:18.235000 audit: BPF prog-id=18 op=LOAD Jan 15 00:40:18.235000 audit: BPF prog-id=19 op=LOAD Jan 15 00:40:18.235000 audit: BPF prog-id=20 op=LOAD Jan 15 00:40:18.239175 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 15 00:40:18.250000 audit: BPF prog-id=21 op=LOAD Jan 15 00:40:18.254137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 00:40:18.267441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 00:40:18.286300 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 00:40:18.284000 audit: BPF prog-id=22 op=LOAD Jan 15 00:40:18.284000 audit: BPF prog-id=23 op=LOAD Jan 15 00:40:18.284000 audit: BPF prog-id=24 op=LOAD Jan 15 00:40:18.297949 kernel: loop5: detected capacity change from 0 to 119256 Jan 15 00:40:18.298000 audit: BPF prog-id=25 op=LOAD Jan 15 00:40:18.299000 audit: BPF prog-id=26 op=LOAD Jan 15 00:40:18.299000 audit: BPF prog-id=27 op=LOAD Jan 15 00:40:18.301547 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 15 00:40:18.340084 kernel: loop6: detected capacity change from 0 to 224512 Jan 15 00:40:18.386013 (sd-merge)[1317]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 15 00:40:18.405947 systemd-nsresourced[1323]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 15 00:40:18.408698 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 15 00:40:18.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:18.436058 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 00:40:18.458576 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jan 15 00:40:18.458685 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Jan 15 00:40:18.474592 (sd-merge)[1317]: Merged extensions into '/usr'. Jan 15 00:40:18.557536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 00:40:18.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:18.572661 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 15 00:40:18.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:18.589743 systemd[1]: Reload requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 00:40:18.590037 systemd[1]: Reloading... Jan 15 00:40:19.264530 kernel: hrtimer: interrupt took 16164123 ns Jan 15 00:40:19.550994 systemd-oomd[1319]: No swap; memory pressure usage will be degraded Jan 15 00:40:19.581951 zram_generator::config[1367]: No configuration found. Jan 15 00:40:19.707278 systemd-resolved[1320]: Positive Trust Anchors: Jan 15 00:40:19.708081 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 00:40:19.708091 systemd-resolved[1320]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 00:40:19.708120 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 00:40:19.724485 systemd-resolved[1320]: Defaulting to hostname 'linux'. Jan 15 00:40:20.144224 systemd[1]: Reloading finished in 1553 ms. Jan 15 00:40:20.312202 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 15 00:40:20.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:20.328370 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 00:40:20.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:20.354627 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 00:40:20.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:20.378457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 00:40:20.423231 systemd[1]: Starting ensure-sysext.service... Jan 15 00:40:20.433738 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 15 00:40:20.447000 audit: BPF prog-id=28 op=LOAD Jan 15 00:40:20.459000 audit: BPF prog-id=15 op=UNLOAD Jan 15 00:40:20.459000 audit: BPF prog-id=29 op=LOAD Jan 15 00:40:20.459000 audit: BPF prog-id=30 op=LOAD Jan 15 00:40:20.460000 audit: BPF prog-id=16 op=UNLOAD Jan 15 00:40:20.460000 audit: BPF prog-id=17 op=UNLOAD Jan 15 00:40:20.461000 audit: BPF prog-id=31 op=LOAD Jan 15 00:40:20.461000 audit: BPF prog-id=22 op=UNLOAD Jan 15 00:40:20.462000 audit: BPF prog-id=32 op=LOAD Jan 15 00:40:20.462000 audit: BPF prog-id=33 op=LOAD Jan 15 00:40:20.462000 audit: BPF prog-id=23 op=UNLOAD Jan 15 00:40:20.462000 audit: BPF prog-id=24 op=UNLOAD Jan 15 00:40:20.464000 audit: BPF prog-id=34 op=LOAD Jan 15 00:40:20.464000 audit: BPF prog-id=18 op=UNLOAD Jan 15 00:40:20.465000 audit: BPF prog-id=35 op=LOAD Jan 15 00:40:20.465000 audit: BPF prog-id=36 op=LOAD Jan 15 00:40:20.465000 audit: BPF prog-id=19 op=UNLOAD Jan 15 00:40:20.466000 audit: BPF prog-id=20 op=UNLOAD Jan 15 00:40:20.470000 audit: BPF prog-id=37 op=LOAD Jan 15 00:40:20.470000 audit: BPF prog-id=25 op=UNLOAD Jan 15 00:40:20.470000 audit: BPF prog-id=38 op=LOAD Jan 15 00:40:20.471000 audit: BPF prog-id=39 op=LOAD Jan 15 00:40:20.471000 audit: BPF prog-id=26 op=UNLOAD Jan 15 00:40:20.471000 audit: BPF prog-id=27 op=UNLOAD Jan 15 00:40:20.473000 audit: BPF prog-id=40 op=LOAD Jan 15 00:40:20.473000 audit: BPF prog-id=21 op=UNLOAD Jan 15 00:40:20.492547 systemd[1]: Reload requested from client PID 1403 ('systemctl') (unit ensure-sysext.service)... Jan 15 00:40:20.492674 systemd[1]: Reloading... Jan 15 00:40:20.567710 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 00:40:20.567754 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 00:40:20.568269 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 00:40:20.571259 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Jan 15 00:40:20.571343 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Jan 15 00:40:20.594979 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:40:20.594996 systemd-tmpfiles[1404]: Skipping /boot Jan 15 00:40:20.632016 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 00:40:20.632115 systemd-tmpfiles[1404]: Skipping /boot Jan 15 00:40:20.706100 zram_generator::config[1433]: No configuration found. Jan 15 00:40:21.587498 systemd[1]: Reloading finished in 1093 ms. 
Jan 15 00:40:21.632000 audit: BPF prog-id=41 op=LOAD Jan 15 00:40:21.647182 kernel: kauditd_printk_skb: 69 callbacks suppressed Jan 15 00:40:21.648318 kernel: audit: type=1334 audit(1768437621.632:179): prog-id=41 op=LOAD Jan 15 00:40:21.651316 kernel: audit: type=1334 audit(1768437621.632:180): prog-id=37 op=UNLOAD Jan 15 00:40:21.632000 audit: BPF prog-id=37 op=UNLOAD Jan 15 00:40:21.663973 kernel: audit: type=1334 audit(1768437621.632:181): prog-id=42 op=LOAD Jan 15 00:40:21.632000 audit: BPF prog-id=42 op=LOAD Jan 15 00:40:21.632000 audit: BPF prog-id=43 op=LOAD Jan 15 00:40:21.692190 kernel: audit: type=1334 audit(1768437621.632:182): prog-id=43 op=LOAD Jan 15 00:40:21.632000 audit: BPF prog-id=38 op=UNLOAD Jan 15 00:40:21.712452 kernel: audit: type=1334 audit(1768437621.632:183): prog-id=38 op=UNLOAD Jan 15 00:40:21.722266 kernel: audit: type=1334 audit(1768437621.632:184): prog-id=39 op=UNLOAD Jan 15 00:40:21.632000 audit: BPF prog-id=39 op=UNLOAD Jan 15 00:40:21.731301 kernel: audit: type=1334 audit(1768437621.634:185): prog-id=44 op=LOAD Jan 15 00:40:21.634000 audit: BPF prog-id=44 op=LOAD Jan 15 00:40:21.732759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 00:40:21.755666 kernel: audit: type=1334 audit(1768437621.634:186): prog-id=31 op=UNLOAD Jan 15 00:40:21.789077 kernel: audit: type=1334 audit(1768437621.634:187): prog-id=45 op=LOAD Jan 15 00:40:21.789210 kernel: audit: type=1334 audit(1768437621.634:188): prog-id=46 op=LOAD Jan 15 00:40:21.634000 audit: BPF prog-id=31 op=UNLOAD Jan 15 00:40:21.634000 audit: BPF prog-id=45 op=LOAD Jan 15 00:40:21.634000 audit: BPF prog-id=46 op=LOAD Jan 15 00:40:21.634000 audit: BPF prog-id=32 op=UNLOAD Jan 15 00:40:21.634000 audit: BPF prog-id=33 op=UNLOAD Jan 15 00:40:21.636000 audit: BPF prog-id=47 op=LOAD Jan 15 00:40:21.636000 audit: BPF prog-id=34 op=UNLOAD Jan 15 00:40:21.636000 audit: BPF prog-id=48 op=LOAD Jan 15 00:40:21.636000 audit: BPF prog-id=49 op=LOAD Jan 15 00:40:21.636000 audit: BPF prog-id=35 op=UNLOAD Jan 15 00:40:21.636000 audit: BPF prog-id=36 op=UNLOAD Jan 15 00:40:21.641000 audit: BPF prog-id=50 op=LOAD Jan 15 00:40:21.642000 audit: BPF prog-id=40 op=UNLOAD Jan 15 00:40:21.644000 audit: BPF prog-id=51 op=LOAD Jan 15 00:40:21.644000 audit: BPF prog-id=28 op=UNLOAD Jan 15 00:40:21.644000 audit: BPF prog-id=52 op=LOAD Jan 15 00:40:21.644000 audit: BPF prog-id=53 op=LOAD Jan 15 00:40:21.644000 audit: BPF prog-id=29 op=UNLOAD Jan 15 00:40:21.646000 audit: BPF prog-id=30 op=UNLOAD Jan 15 00:40:21.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:21.795571 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 00:40:21.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:21.858732 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:40:21.883359 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 00:40:21.895411 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 00:40:21.910330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 15 00:40:21.920000 audit: BPF prog-id=8 op=UNLOAD Jan 15 00:40:21.920000 audit: BPF prog-id=7 op=UNLOAD Jan 15 00:40:21.923494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 00:40:21.921000 audit: BPF prog-id=54 op=LOAD Jan 15 00:40:21.921000 audit: BPF prog-id=55 op=LOAD Jan 15 00:40:21.940763 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 00:40:21.957603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:21.958011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:40:21.962683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 00:40:21.973415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 00:40:21.987150 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 00:40:21.996390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:40:21.997266 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:40:21.997379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:40:21.997462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:22.004488 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:22.004660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:40:22.005035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:40:22.005255 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 00:40:22.005380 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:40:22.005490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:22.014642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:22.015117 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 00:40:22.018322 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 00:40:22.026674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 00:40:22.027340 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 15 00:40:22.027428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 00:40:22.027536 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 00:40:22.038199 systemd[1]: Finished ensure-sysext.service. Jan 15 00:40:22.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.051000 audit[1487]: SYSTEM_BOOT pid=1487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.059618 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 00:40:22.065618 systemd-udevd[1485]: Using default interface naming scheme 'v257'. Jan 15 00:40:22.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.084064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 00:40:22.084337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 00:40:22.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.101110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 00:40:22.103959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 00:40:22.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.131717 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 00:40:22.132236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 00:40:22.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:22.143583 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 15 00:40:22.144079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 00:40:22.151949 augenrules[1505]: No rules Jan 15 00:40:22.149000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 00:40:22.149000 audit[1505]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd38c99410 a2=420 a3=0 items=0 ppid=1475 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:22.149000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:40:22.159061 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:40:22.159500 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 00:40:22.173128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 00:40:22.173389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 00:40:22.177297 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 00:40:22.190166 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 00:40:22.207608 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 00:40:22.240640 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 00:40:22.272248 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 00:40:22.284390 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 00:40:22.551724 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 00:40:22.731499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 00:40:22.745978 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 00:40:22.790506 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 00:40:22.811095 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 00:40:22.810063 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 00:40:22.818532 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 00:40:22.824099 systemd-networkd[1536]: lo: Link UP Jan 15 00:40:22.825017 systemd-networkd[1536]: lo: Gained carrier Jan 15 00:40:22.835196 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 00:40:22.848518 systemd[1]: Reached target network.target - Network. Jan 15 00:40:22.854006 kernel: ACPI: button: Power Button [PWRF] Jan 15 00:40:22.863696 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 00:40:22.881698 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 00:40:23.159735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 15 00:40:23.490571 systemd-networkd[1536]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:40:23.490721 systemd-networkd[1536]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 00:40:23.496435 systemd-networkd[1536]: eth0: Link UP Jan 15 00:40:23.499151 systemd-networkd[1536]: eth0: Gained carrier Jan 15 00:40:23.499179 systemd-networkd[1536]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 00:40:23.589103 systemd-networkd[1536]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 00:40:23.593239 systemd-timesyncd[1520]: Network configuration changed, trying to establish connection. Jan 15 00:40:25.389461 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 15 00:40:25.390179 systemd-timesyncd[1520]: Initial clock synchronization to Thu 2026-01-15 00:40:25.388455 UTC. Jan 15 00:40:25.390382 systemd-resolved[1320]: Clock change detected. Flushing caches. Jan 15 00:40:25.407075 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 15 00:40:25.422169 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 00:40:25.446289 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 00:40:25.465463 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 00:40:25.763015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:40:26.142644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 00:40:26.143243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:40:26.560013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 00:40:27.225693 systemd-networkd[1536]: eth0: Gained IPv6LL Jan 15 00:40:27.283555 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 00:40:27.298628 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 00:40:27.716330 ldconfig[1478]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 00:40:27.774149 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 00:40:27.829403 kernel: kvm_amd: TSC scaling supported Jan 15 00:40:27.830519 kernel: kvm_amd: Nested Virtualization enabled Jan 15 00:40:27.832930 kernel: kvm_amd: Nested Paging enabled Jan 15 00:40:27.842963 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 15 00:40:27.843185 kernel: kvm_amd: PMU virtualization is disabled Jan 15 00:40:27.842170 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 00:40:28.145366 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 00:40:28.279989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 00:40:28.301606 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 00:40:28.315635 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 00:40:28.338449 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 00:40:28.358453 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jan 15 00:40:28.402204 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 00:40:28.430198 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 00:40:28.456538 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 15 00:40:28.474122 kernel: EDAC MC: Ver: 3.0.0 Jan 15 00:40:28.475314 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 15 00:40:28.493508 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 00:40:28.523006 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 00:40:28.533548 systemd[1]: Reached target paths.target - Path Units. Jan 15 00:40:28.552026 systemd[1]: Reached target timers.target - Timer Units. Jan 15 00:40:28.607155 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 00:40:28.671500 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 00:40:28.746194 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 00:40:28.766176 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 00:40:28.793931 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 00:40:28.848496 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 00:40:28.863357 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 00:40:28.879069 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 00:40:28.898529 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 00:40:28.909539 systemd[1]: Reached target basic.target - Basic System. Jan 15 00:40:28.920491 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:40:28.920660 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 00:40:28.924267 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 00:40:28.939018 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 15 00:40:28.953153 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 00:40:28.967563 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 00:40:28.987084 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 00:40:29.003064 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 00:40:29.014349 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 00:40:29.028038 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 15 00:40:29.044072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:40:29.066044 jq[1597]: false Jan 15 00:40:29.080979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 00:40:29.115648 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 15 00:40:29.134599 extend-filesystems[1598]: Found /dev/vda6 Jan 15 00:40:29.169649 extend-filesystems[1598]: Found /dev/vda9 Jan 15 00:40:29.142450 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 00:40:29.194476 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing passwd entry cache Jan 15 00:40:29.151458 oslogin_cache_refresh[1599]: Refreshing passwd entry cache Jan 15 00:40:29.206331 extend-filesystems[1598]: Checking size of /dev/vda9 Jan 15 00:40:29.172389 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 00:40:29.223020 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting users, quitting Jan 15 00:40:29.223020 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:40:29.223020 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing group entry cache Jan 15 00:40:29.207227 oslogin_cache_refresh[1599]: Failure getting users, quitting Jan 15 00:40:29.197927 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 00:40:29.207282 oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 00:40:29.207424 oslogin_cache_refresh[1599]: Refreshing group entry cache Jan 15 00:40:29.224380 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 00:40:29.233517 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 00:40:29.234449 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 00:40:29.241893 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting groups, quitting Jan 15 00:40:29.241893 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:40:29.240001 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 00:40:29.235296 oslogin_cache_refresh[1599]: Failure getting groups, quitting Jan 15 00:40:29.235311 oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 00:40:29.258413 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 00:40:29.307480 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 00:40:29.321364 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 00:40:29.322241 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 00:40:29.323625 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 15 00:40:29.325639 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 15 00:40:29.339632 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 00:40:29.340415 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 00:40:29.369322 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 00:40:29.370937 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 15 00:40:29.506041 extend-filesystems[1598]: Resized partition /dev/vda9 Jan 15 00:40:29.540330 tar[1635]: linux-amd64/LICENSE Jan 15 00:40:29.548417 jq[1623]: true Jan 15 00:40:29.549246 extend-filesystems[1658]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 00:40:29.561658 tar[1635]: linux-amd64/helm Jan 15 00:40:29.565654 update_engine[1621]: I20260115 00:40:29.565369 1621 main.cc:92] Flatcar Update Engine starting Jan 15 00:40:29.572617 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 00:40:29.644057 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 15 00:40:29.640548 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 15 00:40:29.641985 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 15 00:40:29.719181 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 00:40:29.769386 jq[1659]: true Jan 15 00:40:29.769218 systemd-logind[1618]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 00:40:29.769267 systemd-logind[1618]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 00:40:29.772389 systemd-logind[1618]: New seat seat0. Jan 15 00:40:29.781939 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 00:40:29.837255 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 15 00:40:29.843986 dbus-daemon[1595]: [system] SELinux support is enabled Jan 15 00:40:29.845600 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 00:40:29.862039 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 00:40:29.862082 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 00:40:29.876380 update_engine[1621]: I20260115 00:40:29.874117 1621 update_check_scheduler.cc:74] Next update check in 4m52s Jan 15 00:40:29.878099 extend-filesystems[1658]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 00:40:29.878099 extend-filesystems[1658]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 15 00:40:29.878099 extend-filesystems[1658]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 15 00:40:29.931123 extend-filesystems[1598]: Resized filesystem in /dev/vda9 Jan 15 00:40:29.882489 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 00:40:29.882512 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 00:40:29.911242 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 00:40:29.940255 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 00:40:29.957104 systemd[1]: Started update-engine.service - Update Engine. Jan 15 00:40:29.961102 dbus-daemon[1595]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 00:40:29.996244 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 00:40:30.136999 bash[1681]: Updated "/home/core/.ssh/authorized_keys" Jan 15 00:40:30.125156 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 15 00:40:30.141586 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 15 00:40:31.277527 locksmithd[1671]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 00:40:32.895629 sshd_keygen[1655]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 00:40:32.990213 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 00:40:33.015635 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 00:40:33.096093 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 00:40:33.096579 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 00:40:33.113556 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 00:40:33.250345 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 00:40:33.273590 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 00:40:33.288005 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 00:40:33.305342 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 00:40:33.787048 containerd[1637]: time="2026-01-15T00:40:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 00:40:33.790624 containerd[1637]: time="2026-01-15T00:40:33.790435276Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 15 00:40:33.814505 containerd[1637]: time="2026-01-15T00:40:33.814388575Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="85.98µs" Jan 15 00:40:33.816153 containerd[1637]: time="2026-01-15T00:40:33.814590101Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 00:40:33.816153 containerd[1637]: time="2026-01-15T00:40:33.815154064Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 00:40:33.816153 containerd[1637]: time="2026-01-15T00:40:33.815178640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 00:40:33.816153 containerd[1637]: time="2026-01-15T00:40:33.815411254Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 00:40:33.816153 containerd[1637]: time="2026-01-15T00:40:33.815439667Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:40:33.816153 containerd[1637]: time="2026-01-15T00:40:33.815519947Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 00:40:33.816153 containerd[1637]: time="2026-01-15T00:40:33.815537219Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:40:33.816541 containerd[1637]: time="2026-01-15T00:40:33.816403898Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 00:40:33.816541 containerd[1637]: time="2026-01-15T00:40:33.816423494Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:40:33.816541 containerd[1637]: time="2026-01-15T00:40:33.816438633Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 00:40:33.816541 containerd[1637]: time="2026-01-15T00:40:33.816449463Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:40:33.818577 containerd[1637]: time="2026-01-15T00:40:33.818127656Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 00:40:33.818577 containerd[1637]: time="2026-01-15T00:40:33.818231230Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 00:40:33.818577 containerd[1637]: time="2026-01-15T00:40:33.818410515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 00:40:33.819599 containerd[1637]: time="2026-01-15T00:40:33.819306448Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:40:33.819599 containerd[1637]: time="2026-01-15T00:40:33.819436320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 00:40:33.819599 containerd[1637]: time="2026-01-15T00:40:33.819452570Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 00:40:33.819599 containerd[1637]: time="2026-01-15T00:40:33.819994683Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 00:40:33.822035 containerd[1637]: time="2026-01-15T00:40:33.821582498Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 00:40:33.822367 containerd[1637]: time="2026-01-15T00:40:33.822125842Z" level=info msg="metadata content store policy set" policy=shared Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.838260414Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839151147Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839369555Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839390966Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839410933Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839541166Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839569829Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839580169Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 00:40:33.839993 containerd[1637]: time="2026-01-15T00:40:33.839592823Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.841291878Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.841330399Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.841344987Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.841357360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.841370474Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842070812Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842103884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842119203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842216685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842231112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842241100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842253223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842264023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 00:40:33.842270 containerd[1637]: time="2026-01-15T00:40:33.842274974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 00:40:33.842529 containerd[1637]: time="2026-01-15T00:40:33.842284581Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 00:40:33.842529 containerd[1637]: time="2026-01-15T00:40:33.842294300Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 00:40:33.842529 containerd[1637]: time="2026-01-15T00:40:33.842398364Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 00:40:33.844044 containerd[1637]: time="2026-01-15T00:40:33.843475906Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 00:40:33.844044 containerd[1637]: time="2026-01-15T00:40:33.843656182Z" level=info msg="Start snapshots syncer" Jan 15 00:40:33.844205 containerd[1637]: time="2026-01-15T00:40:33.844066249Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 00:40:33.847375 containerd[1637]: time="2026-01-15T00:40:33.847220569Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 00:40:33.847375 containerd[1637]: time="2026-01-15T00:40:33.847290019Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.847561355Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848071728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848148943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848161767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848176364Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848190720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 
00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848203704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848214895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 00:40:33.848225 containerd[1637]: time="2026-01-15T00:40:33.848227690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 00:40:33.848598 containerd[1637]: time="2026-01-15T00:40:33.848241836Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 00:40:33.848598 containerd[1637]: time="2026-01-15T00:40:33.848507081Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:40:33.848598 containerd[1637]: time="2026-01-15T00:40:33.848529784Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 00:40:33.848598 containerd[1637]: time="2026-01-15T00:40:33.848541836Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:40:33.848598 containerd[1637]: time="2026-01-15T00:40:33.848554249Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 00:40:33.848598 containerd[1637]: time="2026-01-15T00:40:33.848562715Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 00:40:33.849182 containerd[1637]: time="2026-01-15T00:40:33.849040898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 00:40:33.849182 containerd[1637]: time="2026-01-15T00:40:33.849139482Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 00:40:33.855378 containerd[1637]: time="2026-01-15T00:40:33.854343911Z" level=info msg="runtime interface created" Jan 15 00:40:33.855378 containerd[1637]: time="2026-01-15T00:40:33.854378916Z" level=info msg="created NRI interface" Jan 15 00:40:33.855378 containerd[1637]: time="2026-01-15T00:40:33.854407990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 00:40:33.855378 containerd[1637]: time="2026-01-15T00:40:33.854438698Z" level=info msg="Connect containerd service" Jan 15 00:40:33.855378 containerd[1637]: time="2026-01-15T00:40:33.854481017Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 00:40:33.861578 containerd[1637]: time="2026-01-15T00:40:33.861260757Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 00:40:33.890505 tar[1635]: linux-amd64/README.md Jan 15 00:40:33.930011 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
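The containerd error above about /etc/cni/net.d is expected at this stage: no pod network add-on has installed a CNI configuration yet, so the CRI plugin defers pod network setup. Purely as a hedged illustration (a real cluster's network add-on writes this file itself, and the file name, bridge name, and subnet below are placeholders), a minimal bridge conflist that would satisfy the check looks like:

    # hypothetical example only; normally written by the cluster's CNI add-on
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF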
Jan 15 00:40:34.292156 containerd[1637]: time="2026-01-15T00:40:34.291645527Z" level=info msg="Start subscribing containerd event" Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.292150720Z" level=info msg="Start recovering state" Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.293517933Z" level=info msg="Start event monitor" Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.293546456Z" level=info msg="Start cni network conf syncer for default" Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.293644719Z" level=info msg="Start streaming server" Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.293658596Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.293667442Z" level=info msg="runtime interface starting up..." Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.293673954Z" level=info msg="starting plugins..." Jan 15 00:40:34.294020 containerd[1637]: time="2026-01-15T00:40:34.293689624Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 00:40:34.298450 containerd[1637]: time="2026-01-15T00:40:34.298394514Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 00:40:34.300079 containerd[1637]: time="2026-01-15T00:40:34.300052329Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 00:40:34.301556 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 00:40:34.307014 containerd[1637]: time="2026-01-15T00:40:34.306391783Z" level=info msg="containerd successfully booted in 0.527137s" Jan 15 00:40:35.048692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:40:35.064113 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 00:40:35.084086 systemd[1]: Startup finished in 7.873s (kernel) + 15.010s (initrd) + 19.237s (userspace) = 42.122s. Jan 15 00:40:35.114531 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:40:35.458300 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 00:40:35.462657 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:50342.service - OpenSSH per-connection server daemon (10.0.0.1:50342). Jan 15 00:40:35.691568 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 50342 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:40:35.695260 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:40:35.720536 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 00:40:35.723579 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 00:40:35.739050 systemd-logind[1618]: New session 1 of user core. Jan 15 00:40:35.767679 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 00:40:35.773415 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 00:40:35.799638 (systemd)[1752]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 00:40:35.814485 systemd-logind[1618]: New session c1 of user core. Jan 15 00:40:36.048167 systemd[1752]: Queued start job for default target default.target. Jan 15 00:40:36.057664 systemd[1752]: Created slice app.slice - User Application Slice. 
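The Startup finished in 7.873s (kernel) + 15.010s (initrd) + 19.237s (userspace) line above can be broken down further after boot; a short sketch using standard systemd tooling:

    # overall timing, matching the "Startup finished" log line
    systemd-analyze
    # per-unit cost, useful when the userspace share dominates as it does here
    systemd-analyze blame | head -n 15
    # the chain of units that gated multi-user.target
    systemd-analyze critical-chain multi-user.target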
Jan 15 00:40:36.058015 systemd[1752]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 15 00:40:36.058030 systemd[1752]: Reached target paths.target - Paths. Jan 15 00:40:36.058080 systemd[1752]: Reached target timers.target - Timers. Jan 15 00:40:36.061271 systemd[1752]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 00:40:36.064490 systemd[1752]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 15 00:40:36.095307 systemd[1752]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 00:40:36.095447 systemd[1752]: Reached target sockets.target - Sockets. Jan 15 00:40:36.097353 systemd[1752]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 15 00:40:36.097536 systemd[1752]: Reached target basic.target - Basic System. Jan 15 00:40:36.097623 systemd[1752]: Reached target default.target - Main User Target. Jan 15 00:40:36.097685 systemd[1752]: Startup finished in 261ms. Jan 15 00:40:36.098150 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 00:40:36.105333 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 00:40:36.110535 kubelet[1735]: E0115 00:40:36.110236 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:40:36.116255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:40:36.116468 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 00:40:36.118098 systemd[1]: kubelet.service: Consumed 4.345s CPU time, 267.2M memory peak. Jan 15 00:40:36.147156 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:50346.service - OpenSSH per-connection server daemon (10.0.0.1:50346). Jan 15 00:40:36.284309 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 50346 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:40:36.286593 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:40:36.303568 systemd-logind[1618]: New session 2 of user core. Jan 15 00:40:36.321349 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 00:40:36.361116 sshd[1770]: Connection closed by 10.0.0.1 port 50346 Jan 15 00:40:36.362250 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jan 15 00:40:36.379696 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:50346.service: Deactivated successfully. Jan 15 00:40:36.383343 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 00:40:36.385656 systemd-logind[1618]: Session 2 logged out. Waiting for processes to exit. Jan 15 00:40:36.391561 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:50356.service - OpenSSH per-connection server daemon (10.0.0.1:50356). Jan 15 00:40:36.393659 systemd-logind[1618]: Removed session 2. Jan 15 00:40:36.598472 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 50356 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:40:36.600659 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:40:36.614289 systemd-logind[1618]: New session 3 of user core. Jan 15 00:40:36.628304 systemd[1]: Started session-3.scope - Session 3 of User core. 
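The kubelet failure above is the usual state of a node before cluster bootstrap: /var/lib/kubelet/config.yaml is written by kubeadm during init or join, and until it exists the unit keeps exiting with status 1 exactly as logged. A brief sketch for confirming that state, assuming kubeadm is the provisioner (paths and unit name are taken from the log):

    # the missing file named in the error
    ls -l /var/lib/kubelet/config.yaml
    # current unit state and the last failure reason
    systemctl status kubelet --no-pager
    # the drop-in that references KUBELET_KUBEADM_ARGS and KUBELET_EXTRA_ARGS
    systemctl cat kubelet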
Jan 15 00:40:36.655672 sshd[1779]: Connection closed by 10.0.0.1 port 50356 Jan 15 00:40:36.656462 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Jan 15 00:40:36.671231 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:50356.service: Deactivated successfully. Jan 15 00:40:36.675391 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 00:40:36.678679 systemd-logind[1618]: Session 3 logged out. Waiting for processes to exit. Jan 15 00:40:36.685066 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:50370.service - OpenSSH per-connection server daemon (10.0.0.1:50370). Jan 15 00:40:36.686536 systemd-logind[1618]: Removed session 3. Jan 15 00:40:36.769313 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 50370 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:40:36.771036 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:40:36.783281 systemd-logind[1618]: New session 4 of user core. Jan 15 00:40:36.794191 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 00:40:36.823663 sshd[1788]: Connection closed by 10.0.0.1 port 50370 Jan 15 00:40:36.824484 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 15 00:40:36.838417 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:50370.service: Deactivated successfully. Jan 15 00:40:36.841406 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 00:40:36.844370 systemd-logind[1618]: Session 4 logged out. Waiting for processes to exit. Jan 15 00:40:36.850091 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:50376.service - OpenSSH per-connection server daemon (10.0.0.1:50376). Jan 15 00:40:36.851637 systemd-logind[1618]: Removed session 4. Jan 15 00:40:36.952557 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 50376 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:40:36.955130 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:40:36.966163 systemd-logind[1618]: New session 5 of user core. Jan 15 00:40:36.980504 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 00:40:37.030424 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 00:40:37.031269 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:40:37.058645 sudo[1798]: pam_unix(sudo:session): session closed for user root Jan 15 00:40:37.063409 sshd[1797]: Connection closed by 10.0.0.1 port 50376 Jan 15 00:40:37.064061 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jan 15 00:40:37.076665 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:50376.service: Deactivated successfully. Jan 15 00:40:37.079492 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 00:40:37.081561 systemd-logind[1618]: Session 5 logged out. Waiting for processes to exit. Jan 15 00:40:37.087603 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:50380.service - OpenSSH per-connection server daemon (10.0.0.1:50380). Jan 15 00:40:37.089327 systemd-logind[1618]: Removed session 5. Jan 15 00:40:37.189196 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 50380 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:40:37.191451 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:40:37.202290 systemd-logind[1618]: New session 6 of user core. 
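The sudo entry above switches SELinux to enforcing at runtime with setenforce 1. As a brief aside, assuming the standard SELinux userland is present, the change can be confirmed and compared against the persistent setting like this:

    # runtime mode; should report Enforcing after the setenforce 1 above
    getenforce
    # fuller report including the loaded policy and mount state
    sestatus
    # the boot-time default lives in /etc/selinux/config
    grep '^SELINUX=' /etc/selinux/config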
Jan 15 00:40:37.216298 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 00:40:37.250125 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 00:40:37.250560 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:40:37.316630 sudo[1809]: pam_unix(sudo:session): session closed for user root Jan 15 00:40:37.331616 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 00:40:37.332346 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:40:37.357080 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 00:40:37.470000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:40:37.474274 augenrules[1831]: No rules Jan 15 00:40:37.475284 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 00:40:37.476025 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 00:40:37.481632 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 15 00:40:37.481698 kernel: audit: type=1305 audit(1768437637.470:221): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 00:40:37.478387 sudo[1808]: pam_unix(sudo:session): session closed for user root Jan 15 00:40:37.495100 sshd[1807]: Connection closed by 10.0.0.1 port 50380 Jan 15 00:40:37.470000 audit[1831]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc5cb0610 a2=420 a3=0 items=0 ppid=1812 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:37.496126 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jan 15 00:40:37.527474 kernel: audit: type=1300 audit(1768437637.470:221): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc5cb0610 a2=420 a3=0 items=0 ppid=1812 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:37.527533 kernel: audit: type=1327 audit(1768437637.470:221): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:40:37.470000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 00:40:37.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.565112 kernel: audit: type=1130 audit(1768437637.475:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.565252 kernel: audit: type=1131 audit(1768437637.475:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:37.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.589036 kernel: audit: type=1106 audit(1768437637.477:224): pid=1808 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.477000 audit[1808]: USER_END pid=1808 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.594596 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:50380.service: Deactivated successfully. Jan 15 00:40:37.597615 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 00:40:37.600116 systemd-logind[1618]: Session 6 logged out. Waiting for processes to exit. Jan 15 00:40:37.604303 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:50386.service - OpenSSH per-connection server daemon (10.0.0.1:50386). Jan 15 00:40:37.605563 systemd-logind[1618]: Removed session 6. Jan 15 00:40:37.616013 kernel: audit: type=1104 audit(1768437637.477:225): pid=1808 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.477000 audit[1808]: CRED_DISP pid=1808 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.499000 audit[1804]: USER_END pid=1804 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.674118 kernel: audit: type=1106 audit(1768437637.499:226): pid=1804 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.500000 audit[1804]: CRED_DISP pid=1804 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.702159 kernel: audit: type=1104 audit(1768437637.500:227): pid=1804 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:50380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:37.728009 kernel: audit: type=1131 audit(1768437637.594:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:50380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:50386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.750000 audit[1840]: USER_ACCT pid=1840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.752238 sshd[1840]: Accepted publickey for core from 10.0.0.1 port 50386 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:40:37.753000 audit[1840]: CRED_ACQ pid=1840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.753000 audit[1840]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7919b3b0 a2=3 a3=0 items=0 ppid=1 pid=1840 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:37.753000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:40:37.755217 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:40:37.768154 systemd-logind[1618]: New session 7 of user core. Jan 15 00:40:37.782263 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 00:40:37.788000 audit[1840]: USER_START pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.792000 audit[1843]: CRED_ACQ pid=1843 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:40:37.816645 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 00:40:37.815000 audit[1844]: USER_ACCT pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:40:37.816000 audit[1844]: CRED_REFR pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:37.817592 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 00:40:37.822000 audit[1844]: USER_START pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:40:38.547475 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 00:40:38.577447 (dockerd)[1864]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 00:40:39.166367 dockerd[1864]: time="2026-01-15T00:40:39.166194486Z" level=info msg="Starting up" Jan 15 00:40:39.169165 dockerd[1864]: time="2026-01-15T00:40:39.168676387Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 00:40:39.213212 dockerd[1864]: time="2026-01-15T00:40:39.213085918Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 00:40:39.384585 dockerd[1864]: time="2026-01-15T00:40:39.384298994Z" level=info msg="Loading containers: start." Jan 15 00:40:39.417299 kernel: Initializing XFRM netlink socket Jan 15 00:40:39.671000 audit[1919]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.671000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff00c621c0 a2=0 a3=0 items=0 ppid=1864 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 00:40:39.684000 audit[1921]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.684000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc293e8360 a2=0 a3=0 items=0 ppid=1864 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 00:40:39.698000 audit[1923]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.698000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7c88f440 a2=0 a3=0 items=0 ppid=1864 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 00:40:39.711000 audit[1925]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.711000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffffbc829e0 a2=0 a3=0 items=0 ppid=1864 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 00:40:39.726000 audit[1927]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.726000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe59ef4cd0 a2=0 a3=0 items=0 ppid=1864 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 00:40:39.739000 audit[1929]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.739000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffeaec0170 a2=0 a3=0 items=0 ppid=1864 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:40:39.756000 audit[1931]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.756000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd78434aa0 a2=0 a3=0 items=0 ppid=1864 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:40:39.773000 audit[1933]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.773000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffed1cdb10 a2=0 a3=0 items=0 ppid=1864 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 00:40:39.896000 audit[1936]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.896000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc3a9b8fc0 a2=0 a3=0 items=0 ppid=1864 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 15 00:40:39.908000 audit[1938]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.908000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff4be44940 a2=0 a3=0 items=0 ppid=1864 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 00:40:39.920000 audit[1940]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.920000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff1d602a00 a2=0 a3=0 items=0 ppid=1864 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 00:40:39.932000 audit[1942]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.932000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe20ad27c0 a2=0 a3=0 items=0 ppid=1864 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:40:39.944000 audit[1944]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:39.944000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffde438a460 a2=0 a3=0 items=0 ppid=1864 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:39.944000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 00:40:40.161000 audit[1974]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.161000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd886fb150 a2=0 a3=0 items=0 ppid=1864 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 00:40:40.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 00:40:40.175000 audit[1976]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.175000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff88b1e4d0 a2=0 a3=0 items=0 ppid=1864 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 00:40:40.186000 audit[1978]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.186000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0be73110 a2=0 a3=0 items=0 ppid=1864 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 00:40:40.199000 audit[1980]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.199000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5bca3c10 a2=0 a3=0 items=0 ppid=1864 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 00:40:40.217000 audit[1982]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.217000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5fe14a70 a2=0 a3=0 items=0 ppid=1864 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.217000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 00:40:40.230000 audit[1984]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.230000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffd31458b0 a2=0 a3=0 items=0 ppid=1864 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:40:40.242000 audit[1986]: NETFILTER_CFG table=filter:21 family=10 
entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.242000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe660ab640 a2=0 a3=0 items=0 ppid=1864 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:40:40.255000 audit[1988]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.255000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffca78ac1c0 a2=0 a3=0 items=0 ppid=1864 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.255000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 00:40:40.271000 audit[1990]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.271000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe054a7e60 a2=0 a3=0 items=0 ppid=1864 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 15 00:40:40.283000 audit[1992]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.283000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdeb4e43d0 a2=0 a3=0 items=0 ppid=1864 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 00:40:40.298000 audit[1994]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.298000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffea1833c50 a2=0 a3=0 items=0 ppid=1864 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 00:40:40.310000 audit[1996]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule 
pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.310000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffe15c2a70 a2=0 a3=0 items=0 ppid=1864 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.310000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 00:40:40.321000 audit[1998]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.321000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffce4468060 a2=0 a3=0 items=0 ppid=1864 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 00:40:40.353000 audit[2003]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.353000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb9cff360 a2=0 a3=0 items=0 ppid=1864 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 00:40:40.368000 audit[2005]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.368000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff6ffa2f80 a2=0 a3=0 items=0 ppid=1864 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 00:40:40.382000 audit[2007]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.382000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc15eae340 a2=0 a3=0 items=0 ppid=1864 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.382000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 00:40:40.394000 audit[2009]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.394000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff28352e30 a2=0 a3=0 items=0 ppid=1864 pid=2009 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.394000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 00:40:40.406000 audit[2011]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.406000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd41dd2e00 a2=0 a3=0 items=0 ppid=1864 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.406000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 00:40:40.418000 audit[2013]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:40:40.418000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc3606a110 a2=0 a3=0 items=0 ppid=1864 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 00:40:40.473000 audit[2018]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.473000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe14568b20 a2=0 a3=0 items=0 ppid=1864 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 15 00:40:40.488000 audit[2020]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.488000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff1b499e30 a2=0 a3=0 items=0 ppid=1864 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.488000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 15 00:40:40.541000 audit[2028]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.541000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd898bcf60 a2=0 a3=0 items=0 ppid=1864 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.541000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 15 00:40:40.585000 audit[2034]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.585000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd54d5a530 a2=0 a3=0 items=0 ppid=1864 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 15 00:40:40.599000 audit[2036]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.599000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd8b40b100 a2=0 a3=0 items=0 ppid=1864 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.599000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 15 00:40:40.611000 audit[2038]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.611000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc88b194e0 a2=0 a3=0 items=0 ppid=1864 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 15 00:40:40.623000 audit[2040]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.623000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd99450d40 a2=0 a3=0 items=0 ppid=1864 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 00:40:40.635000 audit[2042]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:40:40.635000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe9fb2c500 a2=0 a3=0 items=0 ppid=1864 pid=2042 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:40:40.635000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 15 00:40:40.637386 systemd-networkd[1536]: docker0: Link UP Jan 15 00:40:40.648181 dockerd[1864]: time="2026-01-15T00:40:40.647670350Z" level=info msg="Loading containers: done." Jan 15 00:40:40.684477 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1409905910-merged.mount: Deactivated successfully. Jan 15 00:40:40.700506 dockerd[1864]: time="2026-01-15T00:40:40.700149719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 00:40:40.700506 dockerd[1864]: time="2026-01-15T00:40:40.700295010Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 00:40:40.700506 dockerd[1864]: time="2026-01-15T00:40:40.700387873Z" level=info msg="Initializing buildkit" Jan 15 00:40:40.805183 dockerd[1864]: time="2026-01-15T00:40:40.804392830Z" level=info msg="Completed buildkit initialization" Jan 15 00:40:40.812673 dockerd[1864]: time="2026-01-15T00:40:40.812514651Z" level=info msg="Daemon has completed initialization" Jan 15 00:40:40.813173 dockerd[1864]: time="2026-01-15T00:40:40.813067135Z" level=info msg="API listen on /run/docker.sock" Jan 15 00:40:40.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:40.813517 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 00:40:42.293453 containerd[1637]: time="2026-01-15T00:40:42.293214746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 15 00:40:43.164167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597507540.mount: Deactivated successfully. 
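
The audit records above capture every iptables/ip6tables call dockerd makes while wiring up its DOCKER-* chains; the PROCTITLE field in each record is the invoked command line, hex-encoded with NUL bytes between arguments. A minimal decoding sketch in Python, using one of the proctitle values copied from the records above:

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    # proctitle value copied from the NETFILTER_CFG/SYSCALL record for pid 2003 above
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74"
        "0066696C746572002D4E00444F434B45522D55534552"
    ))
    # -> /usr/bin/iptables --wait -t filter -N DOCKER-USER

The same helper applies to every PROCTITLE value in this boot, including the longer rules above (the MASQUERADE, conntrack ACCEPT, and DOCKER-ISOLATION chain inserts).
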
Jan 15 00:40:45.897312 containerd[1637]: time="2026-01-15T00:40:45.897015805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:45.898106 containerd[1637]: time="2026-01-15T00:40:45.897965717Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 15 00:40:45.901698 containerd[1637]: time="2026-01-15T00:40:45.901540993Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:45.908309 containerd[1637]: time="2026-01-15T00:40:45.908140386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:45.910088 containerd[1637]: time="2026-01-15T00:40:45.909684472Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.616427366s" Jan 15 00:40:45.910149 containerd[1637]: time="2026-01-15T00:40:45.910104426Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 15 00:40:45.912557 containerd[1637]: time="2026-01-15T00:40:45.912400353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 15 00:40:46.367613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 00:40:46.371253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:40:47.231327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:40:47.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:47.240045 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 15 00:40:47.240123 kernel: audit: type=1130 audit(1768437647.231:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:47.276358 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:40:47.431702 kubelet[2156]: E0115 00:40:47.431279 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:40:47.439258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:40:47.439566 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
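
The kube-apiserver pull a few entries above reports both the bytes read during the pull (27401903) and the elapsed time (3.616427366s), so a rough effective transfer rate falls straight out of the log. A quick sketch with those two figures copied from the containerd records above:

    # Figures copied from the containerd records above (kube-apiserver:v1.32.11).
    bytes_read = 27_401_903        # "bytes read" reported when the pull stopped
    elapsed_s = 3.616427366        # duration from the "Pulled image ... in" message

    rate = bytes_read / elapsed_s
    print(f"~{rate / 1e6:.1f} MB/s (~{rate / 2**20:.1f} MiB/s)")   # ~7.6 MB/s (~7.2 MiB/s)

This is only an approximation: the duration also covers registry round-trips and unpacking, and the size quoted in the "Pulled image" message counts the image content slightly differently from the bytes-read counter.
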
Jan 15 00:40:47.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:40:47.441520 systemd[1]: kubelet.service: Consumed 867ms CPU time, 109.9M memory peak. Jan 15 00:40:47.464378 kernel: audit: type=1131 audit(1768437647.440:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:40:48.782560 containerd[1637]: time="2026-01-15T00:40:48.782304026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:48.785069 containerd[1637]: time="2026-01-15T00:40:48.785014107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 15 00:40:48.788605 containerd[1637]: time="2026-01-15T00:40:48.788490544Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:48.796204 containerd[1637]: time="2026-01-15T00:40:48.796042436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:48.796650 containerd[1637]: time="2026-01-15T00:40:48.796573111Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.8841416s" Jan 15 00:40:48.796650 containerd[1637]: time="2026-01-15T00:40:48.796603247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 15 00:40:48.799926 containerd[1637]: time="2026-01-15T00:40:48.799633476Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 15 00:40:51.249632 containerd[1637]: time="2026-01-15T00:40:51.248487691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:51.252657 containerd[1637]: time="2026-01-15T00:40:51.251620174Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 15 00:40:51.255438 containerd[1637]: time="2026-01-15T00:40:51.255408708Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:51.261937 containerd[1637]: time="2026-01-15T00:40:51.261665166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:51.263249 containerd[1637]: time="2026-01-15T00:40:51.263214910Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.463164385s" Jan 15 00:40:51.264201 containerd[1637]: time="2026-01-15T00:40:51.263344712Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 15 00:40:51.265574 containerd[1637]: time="2026-01-15T00:40:51.265220234Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 15 00:40:52.923310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675626735.mount: Deactivated successfully. Jan 15 00:40:55.952091 containerd[1637]: time="2026-01-15T00:40:55.950629654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:55.954993 containerd[1637]: time="2026-01-15T00:40:55.954649054Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 15 00:40:55.958234 containerd[1637]: time="2026-01-15T00:40:55.957698693Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:55.967527 containerd[1637]: time="2026-01-15T00:40:55.967493854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:55.968164 containerd[1637]: time="2026-01-15T00:40:55.968137352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 4.702789529s" Jan 15 00:40:55.969325 containerd[1637]: time="2026-01-15T00:40:55.969183104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 15 00:40:55.980061 containerd[1637]: time="2026-01-15T00:40:55.979574902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 15 00:40:56.732499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583904485.mount: Deactivated successfully. Jan 15 00:40:57.542269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 00:40:57.547078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:40:58.058323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:40:58.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:40:58.077017 kernel: audit: type=1130 audit(1768437658.057:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:40:58.082509 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 00:40:58.231115 kubelet[2236]: E0115 00:40:58.230946 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 00:40:58.236617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 00:40:58.237161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 00:40:58.238562 systemd[1]: kubelet.service: Consumed 598ms CPU time, 112.7M memory peak. Jan 15 00:40:58.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:40:58.262362 kernel: audit: type=1131 audit(1768437658.237:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:40:58.691879 containerd[1637]: time="2026-01-15T00:40:58.691389288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:58.695377 containerd[1637]: time="2026-01-15T00:40:58.695176900Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17570412" Jan 15 00:40:58.698074 containerd[1637]: time="2026-01-15T00:40:58.697665732Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:58.702431 containerd[1637]: time="2026-01-15T00:40:58.702388509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:40:58.703735 containerd[1637]: time="2026-01-15T00:40:58.703679858Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.723996714s" Jan 15 00:40:58.704008 containerd[1637]: time="2026-01-15T00:40:58.703944597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 15 00:40:58.705392 containerd[1637]: time="2026-01-15T00:40:58.705352760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 00:40:59.161553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014692167.mount: Deactivated successfully. 
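
This is the kubelet's second failed start for the same reason: /var/lib/kubelet/config.yaml does not exist yet (it is normally written by kubeadm during init/join), so systemd keeps scheduling restarts. A small sketch that pulls the restart counter out of journal text like the lines above, fed on stdin (e.g. journalctl -b | python3 restarts.py):

    import re
    import sys

    # Matches systemd's "kubelet.service: Scheduled restart job, restart counter is at N."
    pattern = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")

    counters = [int(n) for line in sys.stdin for n in pattern.findall(line)]
    if counters:
        print(f"kubelet has been restarted {max(counters)} time(s) this boot")
    else:
        print("no kubelet restarts recorded")

Run against the text above it reports 2, matching the counter in the most recent "Scheduled restart job" entry.
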
Jan 15 00:40:59.178015 containerd[1637]: time="2026-01-15T00:40:59.177516172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:40:59.180919 containerd[1637]: time="2026-01-15T00:40:59.180630791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 00:40:59.183397 containerd[1637]: time="2026-01-15T00:40:59.183218540Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:40:59.188962 containerd[1637]: time="2026-01-15T00:40:59.188671453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 00:40:59.190287 containerd[1637]: time="2026-01-15T00:40:59.190109389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 484.722717ms" Jan 15 00:40:59.190287 containerd[1637]: time="2026-01-15T00:40:59.190209976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 15 00:40:59.191375 containerd[1637]: time="2026-01-15T00:40:59.191202814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 15 00:40:59.716704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226853454.mount: Deactivated successfully. 
Jan 15 00:41:03.037866 containerd[1637]: time="2026-01-15T00:41:03.037525955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:03.039401 containerd[1637]: time="2026-01-15T00:41:03.039352302Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 15 00:41:03.042301 containerd[1637]: time="2026-01-15T00:41:03.042105053Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:03.050451 containerd[1637]: time="2026-01-15T00:41:03.050260763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:03.056707 containerd[1637]: time="2026-01-15T00:41:03.056378320Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.865068799s" Jan 15 00:41:03.056707 containerd[1637]: time="2026-01-15T00:41:03.056584422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 15 00:41:06.061842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:41:06.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:06.062216 systemd[1]: kubelet.service: Consumed 598ms CPU time, 112.7M memory peak. Jan 15 00:41:06.065901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:41:06.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:06.101664 kernel: audit: type=1130 audit(1768437666.061:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:06.101969 kernel: audit: type=1131 audit(1768437666.061:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:06.126360 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-7.scope)... Jan 15 00:41:06.126454 systemd[1]: Reloading... Jan 15 00:41:06.276038 zram_generator::config[2382]: No configuration found. Jan 15 00:41:06.611491 systemd[1]: Reloading finished in 484 ms. 
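
The kernel-echoed audit records above carry their own timestamp as epoch seconds plus a serial, e.g. audit(1768437666.061:283); converting that epoch back to wall-clock time shows it lines up with the journal prefix on the corresponding audit[1] entry. A minimal check:

    from datetime import datetime, timezone

    # Epoch and serial copied from "audit(1768437666.061:283)" above.
    epoch, serial = 1768437666.061, 283
    stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
    print(stamp.strftime("%b %d %H:%M:%S.%f"), f"(serial {serial})")
    # -> Jan 15 00:41:06.061000 (serial 283), matching the journal prefix on that record
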
Jan 15 00:41:06.649000 audit: BPF prog-id=61 op=LOAD Jan 15 00:41:06.649000 audit: BPF prog-id=57 op=UNLOAD Jan 15 00:41:06.650000 audit: BPF prog-id=62 op=LOAD Jan 15 00:41:06.650000 audit: BPF prog-id=56 op=UNLOAD Jan 15 00:41:06.664132 kernel: audit: type=1334 audit(1768437666.649:285): prog-id=61 op=LOAD Jan 15 00:41:06.664195 kernel: audit: type=1334 audit(1768437666.649:286): prog-id=57 op=UNLOAD Jan 15 00:41:06.664244 kernel: audit: type=1334 audit(1768437666.650:287): prog-id=62 op=LOAD Jan 15 00:41:06.664280 kernel: audit: type=1334 audit(1768437666.650:288): prog-id=56 op=UNLOAD Jan 15 00:41:06.650000 audit: BPF prog-id=63 op=LOAD Jan 15 00:41:06.688947 kernel: audit: type=1334 audit(1768437666.650:289): prog-id=63 op=LOAD Jan 15 00:41:06.651000 audit: BPF prog-id=44 op=UNLOAD Jan 15 00:41:06.695009 kernel: audit: type=1334 audit(1768437666.651:290): prog-id=44 op=UNLOAD Jan 15 00:41:06.651000 audit: BPF prog-id=64 op=LOAD Jan 15 00:41:06.702313 kernel: audit: type=1334 audit(1768437666.651:291): prog-id=64 op=LOAD Jan 15 00:41:06.651000 audit: BPF prog-id=65 op=LOAD Jan 15 00:41:06.709492 kernel: audit: type=1334 audit(1768437666.651:292): prog-id=65 op=LOAD Jan 15 00:41:06.651000 audit: BPF prog-id=45 op=UNLOAD Jan 15 00:41:06.651000 audit: BPF prog-id=46 op=UNLOAD Jan 15 00:41:06.655000 audit: BPF prog-id=66 op=LOAD Jan 15 00:41:06.655000 audit: BPF prog-id=58 op=UNLOAD Jan 15 00:41:06.655000 audit: BPF prog-id=67 op=LOAD Jan 15 00:41:06.655000 audit: BPF prog-id=68 op=LOAD Jan 15 00:41:06.655000 audit: BPF prog-id=59 op=UNLOAD Jan 15 00:41:06.655000 audit: BPF prog-id=60 op=UNLOAD Jan 15 00:41:06.657000 audit: BPF prog-id=69 op=LOAD Jan 15 00:41:06.657000 audit: BPF prog-id=50 op=UNLOAD Jan 15 00:41:06.661000 audit: BPF prog-id=70 op=LOAD Jan 15 00:41:06.661000 audit: BPF prog-id=41 op=UNLOAD Jan 15 00:41:06.661000 audit: BPF prog-id=71 op=LOAD Jan 15 00:41:06.661000 audit: BPF prog-id=72 op=LOAD Jan 15 00:41:06.661000 audit: BPF prog-id=42 op=UNLOAD Jan 15 00:41:06.661000 audit: BPF prog-id=43 op=UNLOAD Jan 15 00:41:06.664000 audit: BPF prog-id=73 op=LOAD Jan 15 00:41:06.718000 audit: BPF prog-id=51 op=UNLOAD Jan 15 00:41:06.719000 audit: BPF prog-id=74 op=LOAD Jan 15 00:41:06.719000 audit: BPF prog-id=75 op=LOAD Jan 15 00:41:06.719000 audit: BPF prog-id=52 op=UNLOAD Jan 15 00:41:06.719000 audit: BPF prog-id=53 op=UNLOAD Jan 15 00:41:06.720000 audit: BPF prog-id=76 op=LOAD Jan 15 00:41:06.720000 audit: BPF prog-id=77 op=LOAD Jan 15 00:41:06.720000 audit: BPF prog-id=54 op=UNLOAD Jan 15 00:41:06.720000 audit: BPF prog-id=55 op=UNLOAD Jan 15 00:41:06.721000 audit: BPF prog-id=78 op=LOAD Jan 15 00:41:06.721000 audit: BPF prog-id=47 op=UNLOAD Jan 15 00:41:06.722000 audit: BPF prog-id=79 op=LOAD Jan 15 00:41:06.722000 audit: BPF prog-id=80 op=LOAD Jan 15 00:41:06.722000 audit: BPF prog-id=48 op=UNLOAD Jan 15 00:41:06.722000 audit: BPF prog-id=49 op=UNLOAD Jan 15 00:41:06.756912 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 00:41:06.757106 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 00:41:06.757656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:41:06.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 00:41:06.758228 systemd[1]: kubelet.service: Consumed 218ms CPU time, 98.4M memory peak. 
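
During the reload the kernel echoes a burst of type=1334 records for the BPF program swaps, alongside the type=1130/1131 echoes of the service start/stop events seen earlier. The pairings visible in this log are enough for a small lookup table; a sketch:

    # Record types as paired with named events in this log
    # (1130 <-> SERVICE_START, 1131 <-> SERVICE_STOP, 1334 <-> the BPF prog-id records).
    AUDIT_TYPES = {
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1334: "BPF",
    }

    def describe(record_type: int) -> str:
        return AUDIT_TYPES.get(record_type, f"unknown type {record_type}")

    print(describe(1334))   # -> BPF
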
Jan 15 00:41:06.761240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:41:07.086144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:41:07.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:07.109616 (kubelet)[2427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 00:41:08.229455 kubelet[2427]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:41:08.229455 kubelet[2427]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 00:41:08.229455 kubelet[2427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:41:08.230653 kubelet[2427]: I0115 00:41:08.229440 2427 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 00:41:09.354033 kubelet[2427]: I0115 00:41:09.353646 2427 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 00:41:09.355521 kubelet[2427]: I0115 00:41:09.355237 2427 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 00:41:09.356456 kubelet[2427]: I0115 00:41:09.356259 2427 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 00:41:09.419970 kubelet[2427]: E0115 00:41:09.419599 2427 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:09.427380 kubelet[2427]: I0115 00:41:09.427265 2427 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:41:09.452584 kubelet[2427]: I0115 00:41:09.452480 2427 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 00:41:09.468548 kubelet[2427]: I0115 00:41:09.468302 2427 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 00:41:09.469171 kubelet[2427]: I0115 00:41:09.468996 2427 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:41:09.470026 kubelet[2427]: I0115 00:41:09.469096 2427 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:41:09.470026 kubelet[2427]: I0115 00:41:09.469976 2427 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:41:09.470026 kubelet[2427]: I0115 00:41:09.469987 2427 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:41:09.470384 kubelet[2427]: I0115 00:41:09.470226 2427 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:41:09.482046 kubelet[2427]: I0115 00:41:09.481848 2427 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:41:09.482046 kubelet[2427]: I0115 00:41:09.482015 2427 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:41:09.482332 kubelet[2427]: I0115 00:41:09.482223 2427 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:41:09.482332 kubelet[2427]: I0115 00:41:09.482318 2427 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:41:09.486867 kubelet[2427]: W0115 00:41:09.486075 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:09.486867 kubelet[2427]: E0115 00:41:09.486370 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:09.490974 kubelet[2427]: W0115 00:41:09.490519 2427 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:09.493903 kubelet[2427]: E0115 00:41:09.490639 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:09.495387 kubelet[2427]: I0115 00:41:09.495288 2427 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:41:09.495967 kubelet[2427]: I0115 00:41:09.495949 2427 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:41:09.498963 kubelet[2427]: W0115 00:41:09.498938 2427 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 00:41:09.515270 kubelet[2427]: I0115 00:41:09.515046 2427 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:41:09.515270 kubelet[2427]: I0115 00:41:09.515093 2427 server.go:1287] "Started kubelet" Jan 15 00:41:09.515501 kubelet[2427]: I0115 00:41:09.515464 2427 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:41:09.516593 kubelet[2427]: I0115 00:41:09.516425 2427 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:41:09.707443 kubelet[2427]: I0115 00:41:09.707317 2427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:41:09.717610 kubelet[2427]: I0115 00:41:09.713335 2427 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:41:09.721435 kubelet[2427]: I0115 00:41:09.714112 2427 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:41:09.727525 kubelet[2427]: I0115 00:41:09.716155 2427 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:41:09.728589 kubelet[2427]: E0115 00:41:09.726033 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ac0b0dadd921c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 00:41:09.515072028 +0000 UTC m=+2.396431147,LastTimestamp:2026-01-15 00:41:09.515072028 +0000 UTC m=+2.396431147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 00:41:09.729896 kubelet[2427]: E0115 00:41:09.729868 2427 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:41:09.732892 kubelet[2427]: I0115 00:41:09.732710 2427 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:41:09.733236 kubelet[2427]: I0115 00:41:09.733152 2427 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:41:09.734261 kubelet[2427]: I0115 00:41:09.730146 2427 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:41:09.734340 kubelet[2427]: W0115 00:41:09.734227 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:09.734418 kubelet[2427]: E0115 00:41:09.734403 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:09.734466 kubelet[2427]: E0115 00:41:09.730218 2427 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 00:41:09.734507 kubelet[2427]: E0115 00:41:09.734407 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" Jan 15 00:41:09.735344 kubelet[2427]: I0115 00:41:09.730136 2427 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:41:09.735559 kubelet[2427]: I0115 00:41:09.735470 2427 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:41:09.739619 kubelet[2427]: I0115 00:41:09.739534 2427 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:41:09.761000 audit[2443]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.761000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc71d4d1d0 a2=0 a3=0 items=0 ppid=2427 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.761000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 00:41:09.766000 audit[2444]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.766000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd9e33de0 a2=0 a3=0 items=0 ppid=2427 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 00:41:09.778000 audit[2448]: 
NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.778000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffca0608170 a2=0 a3=0 items=0 ppid=2427 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:41:09.784433 kubelet[2427]: I0115 00:41:09.784301 2427 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:41:09.784433 kubelet[2427]: I0115 00:41:09.784414 2427 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:41:09.784553 kubelet[2427]: I0115 00:41:09.784441 2427 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:41:09.792000 audit[2450]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.792000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffea0981d0 a2=0 a3=0 items=0 ppid=2427 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:41:09.834109 kubelet[2427]: I0115 00:41:09.833973 2427 policy_none.go:49] "None policy: Start" Jan 15 00:41:09.834109 kubelet[2427]: I0115 00:41:09.834015 2427 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:41:09.834109 kubelet[2427]: I0115 00:41:09.834040 2427 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:41:09.836014 kubelet[2427]: E0115 00:41:09.835976 2427 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 00:41:09.856000 audit[2453]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.856000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff63a92030 a2=0 a3=0 items=0 ppid=2427 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 15 00:41:09.858371 kubelet[2427]: I0115 00:41:09.858211 2427 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 00:41:09.862553 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 15 00:41:09.865000 audit[2455]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:09.865000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd73976b00 a2=0 a3=0 items=0 ppid=2427 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 00:41:09.867283 kubelet[2427]: I0115 00:41:09.867196 2427 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 00:41:09.867461 kubelet[2427]: I0115 00:41:09.867380 2427 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:41:09.867507 kubelet[2427]: I0115 00:41:09.867486 2427 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 00:41:09.867507 kubelet[2427]: I0115 00:41:09.867502 2427 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:41:09.867949 kubelet[2427]: E0115 00:41:09.867584 2427 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:41:09.870000 audit[2456]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.870000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5c1d5970 a2=0 a3=0 items=0 ppid=2427 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 00:41:09.872525 kubelet[2427]: W0115 00:41:09.872436 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:09.872525 kubelet[2427]: E0115 00:41:09.872496 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:09.872000 audit[2457]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:09.872000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4a458520 a2=0 a3=0 items=0 ppid=2427 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 00:41:09.875000 
audit[2458]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.875000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaa17f0e0 a2=0 a3=0 items=0 ppid=2427 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.875000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 00:41:09.880000 audit[2459]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:09.880000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc00872110 a2=0 a3=0 items=0 ppid=2427 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 00:41:09.881000 audit[2460]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:09.881000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5511bdf0 a2=0 a3=0 items=0 ppid=2427 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 00:41:09.886088 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 00:41:09.890000 audit[2461]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:09.890000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe645b5fc0 a2=0 a3=0 items=0 ppid=2427 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:09.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 00:41:09.895566 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
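
systemd has just created the kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice cgroups, which follows from the CgroupsPerQOS=true and CgroupDriver=systemd settings in the "Creating Container Manager object based on Node Config" record above. That nodeConfig value is a JSON object, so it can be inspected directly; a sketch assuming the blob has been copied into a file named nodeconfig.json (field names match that record):

    import json

    # nodeconfig.json = the {...} value from the container_manager_linux.go record above.
    with open("nodeconfig.json") as f:
        cfg = json.load(f)

    print("cgroup driver:", cfg["CgroupDriver"], "| cgroups per QoS:", cfg["CgroupsPerQOS"])
    for t in cfg["HardEvictionThresholds"]:
        value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
        print(f'{t["Signal"]} {t["Operator"]} {value}')

For this boot that prints memory.available LessThan 100Mi, nodefs.available LessThan 10%, nodefs.inodesFree LessThan 5%, imagefs.available LessThan 15%, and imagefs.inodesFree LessThan 5%.
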
Jan 15 00:41:09.919206 kubelet[2427]: I0115 00:41:09.919025 2427 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:41:09.920508 kubelet[2427]: I0115 00:41:09.920340 2427 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:41:09.920508 kubelet[2427]: I0115 00:41:09.920430 2427 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:41:09.928941 kubelet[2427]: I0115 00:41:09.928627 2427 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:41:09.935420 kubelet[2427]: E0115 00:41:09.935273 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Jan 15 00:41:09.950253 kubelet[2427]: E0115 00:41:09.950097 2427 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 00:41:09.950253 kubelet[2427]: E0115 00:41:09.950249 2427 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 00:41:10.024282 kubelet[2427]: I0115 00:41:10.023104 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:41:10.026069 kubelet[2427]: E0115 00:41:10.025177 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 15 00:41:10.040541 kubelet[2427]: I0115 00:41:10.040274 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:10.040541 kubelet[2427]: I0115 00:41:10.040418 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:10.040541 kubelet[2427]: I0115 00:41:10.040462 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 00:41:10.040978 kubelet[2427]: I0115 00:41:10.040495 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5f65ae989b3e60bb742bb5216e6ad8d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5f65ae989b3e60bb742bb5216e6ad8d\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:10.041183 kubelet[2427]: I0115 00:41:10.040946 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e5f65ae989b3e60bb742bb5216e6ad8d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5f65ae989b3e60bb742bb5216e6ad8d\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:10.041183 kubelet[2427]: I0115 00:41:10.041070 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5f65ae989b3e60bb742bb5216e6ad8d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e5f65ae989b3e60bb742bb5216e6ad8d\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:10.041183 kubelet[2427]: I0115 00:41:10.041100 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:10.041183 kubelet[2427]: I0115 00:41:10.041130 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:10.041323 kubelet[2427]: I0115 00:41:10.041155 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:10.042516 systemd[1]: Created slice kubepods-burstable-pode5f65ae989b3e60bb742bb5216e6ad8d.slice - libcontainer container kubepods-burstable-pode5f65ae989b3e60bb742bb5216e6ad8d.slice. Jan 15 00:41:10.066205 kubelet[2427]: E0115 00:41:10.065994 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:10.074380 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 15 00:41:10.080347 kubelet[2427]: E0115 00:41:10.080222 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:10.087568 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
Jan 15 00:41:10.092552 kubelet[2427]: E0115 00:41:10.092268 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:10.228964 kubelet[2427]: I0115 00:41:10.228600 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:41:10.230266 kubelet[2427]: E0115 00:41:10.229990 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 15 00:41:10.338565 kubelet[2427]: E0115 00:41:10.338115 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Jan 15 00:41:10.369257 kubelet[2427]: E0115 00:41:10.368902 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:10.374031 containerd[1637]: time="2026-01-15T00:41:10.373588995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e5f65ae989b3e60bb742bb5216e6ad8d,Namespace:kube-system,Attempt:0,}" Jan 15 00:41:10.382410 kubelet[2427]: E0115 00:41:10.382102 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:10.384850 containerd[1637]: time="2026-01-15T00:41:10.383365232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 15 00:41:10.393508 kubelet[2427]: E0115 00:41:10.393423 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:10.394528 containerd[1637]: time="2026-01-15T00:41:10.394380512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 15 00:41:10.407449 kubelet[2427]: W0115 00:41:10.407042 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:10.407449 kubelet[2427]: E0115 00:41:10.407202 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:10.529925 containerd[1637]: time="2026-01-15T00:41:10.529481795Z" level=info msg="connecting to shim c622b0a32a9c8f0f74bc47fe7788d9deb2c94b01c8d033cd463417bc5f8dd6b7" address="unix:///run/containerd/s/505a49be215df6c309e2b855983f1241957b886ced2959844fbd2fb37e266755" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:41:10.538194 containerd[1637]: time="2026-01-15T00:41:10.538163483Z" level=info msg="connecting to shim 
dec4b46cae63a9bed5c9a8f3c7362ed9f28a3b43cc4470a6432505e0b76ac9f7" address="unix:///run/containerd/s/7949679d077ed8b657fe600c9106337189f8727e92d7501260ef22f29af39723" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:41:10.561968 kubelet[2427]: W0115 00:41:10.561825 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:10.561968 kubelet[2427]: E0115 00:41:10.561961 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:11.238593 containerd[1637]: time="2026-01-15T00:41:11.238171129Z" level=info msg="connecting to shim ed0a8c29176e59ba760466980a395995a2272d7b9f11f7946a270ac508fbc3e3" address="unix:///run/containerd/s/9bbeb371c245db08a76ec7afe4568e799db69ee674ed822f2dba9e57ad41b9c3" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:41:11.324456 kubelet[2427]: E0115 00:41:11.322026 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" Jan 15 00:41:11.324456 kubelet[2427]: W0115 00:41:11.322864 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:11.324456 kubelet[2427]: E0115 00:41:11.322941 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:11.324456 kubelet[2427]: W0115 00:41:11.323306 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:11.324456 kubelet[2427]: E0115 00:41:11.323436 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:11.329487 kubelet[2427]: I0115 00:41:11.329006 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:41:11.329487 kubelet[2427]: E0115 00:41:11.329439 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 15 00:41:11.378141 systemd[1]: Started cri-containerd-c622b0a32a9c8f0f74bc47fe7788d9deb2c94b01c8d033cd463417bc5f8dd6b7.scope - libcontainer container 
c622b0a32a9c8f0f74bc47fe7788d9deb2c94b01c8d033cd463417bc5f8dd6b7. Jan 15 00:41:11.462833 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 15 00:41:11.462927 kernel: audit: type=1334 audit(1768437671.454:339): prog-id=81 op=LOAD Jan 15 00:41:11.454000 audit: BPF prog-id=81 op=LOAD Jan 15 00:41:11.476921 kernel: audit: type=1334 audit(1768437671.458:340): prog-id=82 op=LOAD Jan 15 00:41:11.458000 audit: BPF prog-id=82 op=LOAD Jan 15 00:41:11.458000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.508931 kubelet[2427]: E0115 00:41:11.507326 2427 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:11.511327 kernel: audit: type=1300 audit(1768437671.458:340): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.511447 kernel: audit: type=1327 audit(1768437671.458:340): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: BPF prog-id=82 op=UNLOAD Jan 15 00:41:11.545050 kernel: audit: type=1334 audit(1768437671.458:341): prog-id=82 op=UNLOAD Jan 15 00:41:11.458000 audit[2502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.617627 kernel: audit: type=1300 audit(1768437671.458:341): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.623386 kernel: audit: type=1327 audit(1768437671.458:341): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: BPF prog-id=83 op=LOAD Jan 15 00:41:11.458000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.667245 kernel: audit: type=1334 audit(1768437671.458:342): prog-id=83 op=LOAD Jan 15 00:41:11.667431 kernel: audit: type=1300 audit(1768437671.458:342): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.667543 kernel: audit: type=1327 audit(1768437671.458:342): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: BPF prog-id=84 op=LOAD Jan 15 00:41:11.458000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: BPF prog-id=84 op=UNLOAD Jan 15 00:41:11.458000 audit[2502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: BPF prog-id=83 op=UNLOAD Jan 15 00:41:11.458000 audit[2502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.458000 audit: BPF prog-id=85 op=LOAD Jan 15 00:41:11.458000 audit[2502]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2479 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336323262306133326139633866306637346263343766653737383864 Jan 15 00:41:11.852279 systemd[1]: Started cri-containerd-ed0a8c29176e59ba760466980a395995a2272d7b9f11f7946a270ac508fbc3e3.scope - libcontainer container ed0a8c29176e59ba760466980a395995a2272d7b9f11f7946a270ac508fbc3e3. Jan 15 00:41:11.866048 systemd[1]: Started cri-containerd-dec4b46cae63a9bed5c9a8f3c7362ed9f28a3b43cc4470a6432505e0b76ac9f7.scope - libcontainer container dec4b46cae63a9bed5c9a8f3c7362ed9f28a3b43cc4470a6432505e0b76ac9f7. Jan 15 00:41:11.950000 audit: BPF prog-id=86 op=LOAD Jan 15 00:41:11.950000 audit: BPF prog-id=87 op=LOAD Jan 15 00:41:11.950000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f0238 a2=98 a3=0 items=0 ppid=2486 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465633462343663616536336139626564356339613866336337333632 Jan 15 00:41:11.951000 audit: BPF prog-id=87 op=UNLOAD Jan 15 00:41:11.951000 audit[2540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465633462343663616536336139626564356339613866336337333632 Jan 15 00:41:11.951000 audit: BPF prog-id=88 op=LOAD Jan 15 00:41:11.951000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f0488 a2=98 a3=0 items=0 ppid=2486 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465633462343663616536336139626564356339613866336337333632 Jan 15 00:41:11.951000 audit: BPF prog-id=89 op=LOAD Jan 15 
00:41:11.951000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001f0218 a2=98 a3=0 items=0 ppid=2486 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465633462343663616536336139626564356339613866336337333632 Jan 15 00:41:11.951000 audit: BPF prog-id=89 op=UNLOAD Jan 15 00:41:11.951000 audit[2540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465633462343663616536336139626564356339613866336337333632 Jan 15 00:41:11.951000 audit: BPF prog-id=88 op=UNLOAD Jan 15 00:41:11.951000 audit[2540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465633462343663616536336139626564356339613866336337333632 Jan 15 00:41:11.951000 audit: BPF prog-id=90 op=LOAD Jan 15 00:41:11.951000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f06e8 a2=98 a3=0 items=0 ppid=2486 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465633462343663616536336139626564356339613866336337333632 Jan 15 00:41:11.992000 audit: BPF prog-id=91 op=LOAD Jan 15 00:41:11.994000 audit: BPF prog-id=92 op=LOAD Jan 15 00:41:11.994000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000152238 a2=98 a3=0 items=0 ppid=2512 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564306138633239313736653539626137363034363639383061333935 Jan 15 00:41:11.994000 audit: BPF prog-id=92 op=UNLOAD Jan 15 00:41:11.994000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 
items=0 ppid=2512 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564306138633239313736653539626137363034363639383061333935 Jan 15 00:41:11.994000 audit: BPF prog-id=93 op=LOAD Jan 15 00:41:11.994000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000152488 a2=98 a3=0 items=0 ppid=2512 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564306138633239313736653539626137363034363639383061333935 Jan 15 00:41:11.995000 audit: BPF prog-id=94 op=LOAD Jan 15 00:41:11.995000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000152218 a2=98 a3=0 items=0 ppid=2512 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564306138633239313736653539626137363034363639383061333935 Jan 15 00:41:11.995000 audit: BPF prog-id=94 op=UNLOAD Jan 15 00:41:11.995000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564306138633239313736653539626137363034363639383061333935 Jan 15 00:41:11.995000 audit: BPF prog-id=93 op=UNLOAD Jan 15 00:41:11.995000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564306138633239313736653539626137363034363639383061333935 Jan 15 00:41:11.995000 audit: BPF prog-id=95 op=LOAD Jan 15 00:41:11.995000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001526e8 a2=98 a3=0 items=0 ppid=2512 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:11.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564306138633239313736653539626137363034363639383061333935 Jan 15 00:41:12.011516 containerd[1637]: time="2026-01-15T00:41:12.011233078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c622b0a32a9c8f0f74bc47fe7788d9deb2c94b01c8d033cd463417bc5f8dd6b7\"" Jan 15 00:41:12.017522 kubelet[2427]: E0115 00:41:12.017455 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:12.029589 containerd[1637]: time="2026-01-15T00:41:12.029525121Z" level=info msg="CreateContainer within sandbox \"c622b0a32a9c8f0f74bc47fe7788d9deb2c94b01c8d033cd463417bc5f8dd6b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 00:41:12.212988 kubelet[2427]: I0115 00:41:12.195980 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:41:12.212988 kubelet[2427]: E0115 00:41:12.197408 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jan 15 00:41:12.256331 containerd[1637]: time="2026-01-15T00:41:12.256239910Z" level=info msg="Container aa6476ff6b1babe7bdc1814c948078e55c40938abbbd7d3430658f513c8518f3: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:41:12.273961 containerd[1637]: time="2026-01-15T00:41:12.273541482Z" level=info msg="CreateContainer within sandbox \"c622b0a32a9c8f0f74bc47fe7788d9deb2c94b01c8d033cd463417bc5f8dd6b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa6476ff6b1babe7bdc1814c948078e55c40938abbbd7d3430658f513c8518f3\"" Jan 15 00:41:12.275149 containerd[1637]: time="2026-01-15T00:41:12.275053584Z" level=info msg="StartContainer for \"aa6476ff6b1babe7bdc1814c948078e55c40938abbbd7d3430658f513c8518f3\"" Jan 15 00:41:12.277853 containerd[1637]: time="2026-01-15T00:41:12.277514130Z" level=info msg="connecting to shim aa6476ff6b1babe7bdc1814c948078e55c40938abbbd7d3430658f513c8518f3" address="unix:///run/containerd/s/505a49be215df6c309e2b855983f1241957b886ced2959844fbd2fb37e266755" protocol=ttrpc version=3 Jan 15 00:41:12.343880 containerd[1637]: time="2026-01-15T00:41:12.343369126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e5f65ae989b3e60bb742bb5216e6ad8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dec4b46cae63a9bed5c9a8f3c7362ed9f28a3b43cc4470a6432505e0b76ac9f7\"" Jan 15 00:41:12.345703 kubelet[2427]: E0115 00:41:12.345225 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:12.349012 containerd[1637]: time="2026-01-15T00:41:12.348900553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed0a8c29176e59ba760466980a395995a2272d7b9f11f7946a270ac508fbc3e3\"" Jan 15 00:41:12.350202 
containerd[1637]: time="2026-01-15T00:41:12.350092630Z" level=info msg="CreateContainer within sandbox \"dec4b46cae63a9bed5c9a8f3c7362ed9f28a3b43cc4470a6432505e0b76ac9f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 00:41:12.356273 kubelet[2427]: E0115 00:41:12.355901 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:12.362906 containerd[1637]: time="2026-01-15T00:41:12.362868186Z" level=info msg="CreateContainer within sandbox \"ed0a8c29176e59ba760466980a395995a2272d7b9f11f7946a270ac508fbc3e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 00:41:12.370268 containerd[1637]: time="2026-01-15T00:41:12.370242194Z" level=info msg="Container 88d64f2bac6d21b04c674bb1b5949b8db64622332aaa305c313c0f3679b95487: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:41:12.395954 containerd[1637]: time="2026-01-15T00:41:12.395917006Z" level=info msg="CreateContainer within sandbox \"dec4b46cae63a9bed5c9a8f3c7362ed9f28a3b43cc4470a6432505e0b76ac9f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88d64f2bac6d21b04c674bb1b5949b8db64622332aaa305c313c0f3679b95487\"" Jan 15 00:41:12.398841 containerd[1637]: time="2026-01-15T00:41:12.397379240Z" level=info msg="Container b4756cd9694e36e11f354e6ddd8ecc39f454ce197aa656542ac9654750a3edd5: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:41:12.400837 containerd[1637]: time="2026-01-15T00:41:12.400295383Z" level=info msg="StartContainer for \"88d64f2bac6d21b04c674bb1b5949b8db64622332aaa305c313c0f3679b95487\"" Jan 15 00:41:12.402352 containerd[1637]: time="2026-01-15T00:41:12.402328513Z" level=info msg="connecting to shim 88d64f2bac6d21b04c674bb1b5949b8db64622332aaa305c313c0f3679b95487" address="unix:///run/containerd/s/7949679d077ed8b657fe600c9106337189f8727e92d7501260ef22f29af39723" protocol=ttrpc version=3 Jan 15 00:41:12.411077 systemd[1]: Started cri-containerd-aa6476ff6b1babe7bdc1814c948078e55c40938abbbd7d3430658f513c8518f3.scope - libcontainer container aa6476ff6b1babe7bdc1814c948078e55c40938abbbd7d3430658f513c8518f3. 
Jan 15 00:41:12.414460 containerd[1637]: time="2026-01-15T00:41:12.414434779Z" level=info msg="CreateContainer within sandbox \"ed0a8c29176e59ba760466980a395995a2272d7b9f11f7946a270ac508fbc3e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4756cd9694e36e11f354e6ddd8ecc39f454ce197aa656542ac9654750a3edd5\"" Jan 15 00:41:12.416704 containerd[1637]: time="2026-01-15T00:41:12.416323414Z" level=info msg="StartContainer for \"b4756cd9694e36e11f354e6ddd8ecc39f454ce197aa656542ac9654750a3edd5\"" Jan 15 00:41:12.417568 containerd[1637]: time="2026-01-15T00:41:12.417429222Z" level=info msg="connecting to shim b4756cd9694e36e11f354e6ddd8ecc39f454ce197aa656542ac9654750a3edd5" address="unix:///run/containerd/s/9bbeb371c245db08a76ec7afe4568e799db69ee674ed822f2dba9e57ad41b9c3" protocol=ttrpc version=3 Jan 15 00:41:12.824112 kubelet[2427]: E0115 00:41:12.823174 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ac0b0dadd921c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 00:41:09.515072028 +0000 UTC m=+2.396431147,LastTimestamp:2026-01-15 00:41:09.515072028 +0000 UTC m=+2.396431147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 00:41:12.882164 systemd[1]: Started cri-containerd-88d64f2bac6d21b04c674bb1b5949b8db64622332aaa305c313c0f3679b95487.scope - libcontainer container 88d64f2bac6d21b04c674bb1b5949b8db64622332aaa305c313c0f3679b95487. Jan 15 00:41:12.904000 audit: BPF prog-id=96 op=LOAD Jan 15 00:41:12.913119 systemd[1]: Started cri-containerd-b4756cd9694e36e11f354e6ddd8ecc39f454ce197aa656542ac9654750a3edd5.scope - libcontainer container b4756cd9694e36e11f354e6ddd8ecc39f454ce197aa656542ac9654750a3edd5. 
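Every "dial tcp 10.0.0.85:6443: connect: connection refused" error in this stretch means the kubelet's lease, event, and informer clients are dialing the advertised apiserver endpoint before the kube-apiserver static pod being launched here is listening; the errors should stop once that container is up. A minimal reachability probe, assuming the same endpoint as in the log:

import socket

def apiserver_reachable(host: str = "10.0.0.85", port: int = 6443, timeout: float = 2.0) -> bool:
    # True once something is accepting TCP connections on the apiserver endpoint.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("apiserver reachable:", apiserver_reachable())

A successful TCP connect only shows a listener exists; a real readiness check would query the apiserver's /readyz endpoint over TLS.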
Jan 15 00:41:12.916000 audit: BPF prog-id=97 op=LOAD Jan 15 00:41:12.916000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2479 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:12.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161363437366666366231626162653762646331383134633934383037 Jan 15 00:41:12.916000 audit: BPF prog-id=97 op=UNLOAD Jan 15 00:41:12.916000 audit[2588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:12.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161363437366666366231626162653762646331383134633934383037 Jan 15 00:41:12.917000 audit: BPF prog-id=98 op=LOAD Jan 15 00:41:12.917000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2479 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:12.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161363437366666366231626162653762646331383134633934383037 Jan 15 00:41:12.917000 audit: BPF prog-id=99 op=LOAD Jan 15 00:41:12.917000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2479 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:12.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161363437366666366231626162653762646331383134633934383037 Jan 15 00:41:12.918000 audit: BPF prog-id=99 op=UNLOAD Jan 15 00:41:12.918000 audit[2588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:12.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161363437366666366231626162653762646331383134633934383037 Jan 15 00:41:12.918000 audit: BPF prog-id=98 op=UNLOAD Jan 15 00:41:12.918000 audit[2588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=2479 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:12.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161363437366666366231626162653762646331383134633934383037 Jan 15 00:41:12.918000 audit: BPF prog-id=100 op=LOAD Jan 15 00:41:12.918000 audit[2588]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2479 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:12.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161363437366666366231626162653762646331383134633934383037 Jan 15 00:41:12.924881 kubelet[2427]: E0115 00:41:12.924836 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="3.2s" Jan 15 00:41:12.988000 audit: BPF prog-id=101 op=LOAD Jan 15 00:41:13.005000 audit: BPF prog-id=102 op=LOAD Jan 15 00:41:13.005000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e0238 a2=98 a3=0 items=0 ppid=2486 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838643634663262616336643231623034633637346262316235393439 Jan 15 00:41:13.006000 audit: BPF prog-id=102 op=UNLOAD Jan 15 00:41:13.006000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838643634663262616336643231623034633637346262316235393439 Jan 15 00:41:13.009000 audit: BPF prog-id=103 op=LOAD Jan 15 00:41:13.009000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e0488 a2=98 a3=0 items=0 ppid=2486 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.009000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838643634663262616336643231623034633637346262316235393439 Jan 15 00:41:13.009000 audit: BPF prog-id=104 op=LOAD Jan 15 00:41:13.009000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001e0218 a2=98 a3=0 items=0 ppid=2486 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838643634663262616336643231623034633637346262316235393439 Jan 15 00:41:13.009000 audit: BPF prog-id=104 op=UNLOAD Jan 15 00:41:13.009000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838643634663262616336643231623034633637346262316235393439 Jan 15 00:41:13.009000 audit: BPF prog-id=103 op=UNLOAD Jan 15 00:41:13.009000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2486 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838643634663262616336643231623034633637346262316235393439 Jan 15 00:41:13.009000 audit: BPF prog-id=105 op=LOAD Jan 15 00:41:13.009000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001e06e8 a2=98 a3=0 items=0 ppid=2486 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838643634663262616336643231623034633637346262316235393439 Jan 15 00:41:13.023000 audit: BPF prog-id=106 op=LOAD Jan 15 00:41:13.181030 kubelet[2427]: W0115 00:41:13.163531 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:13.181030 kubelet[2427]: E0115 00:41:13.167050 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: 
Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:13.300255 kubelet[2427]: W0115 00:41:13.300077 2427 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jan 15 00:41:13.300255 kubelet[2427]: E0115 00:41:13.300236 2427 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jan 15 00:41:13.369000 audit: BPF prog-id=107 op=LOAD Jan 15 00:41:13.369000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2512 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234373536636439363934653336653131663335346536646464386563 Jan 15 00:41:13.369000 audit: BPF prog-id=107 op=UNLOAD Jan 15 00:41:13.369000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234373536636439363934653336653131663335346536646464386563 Jan 15 00:41:13.369000 audit: BPF prog-id=108 op=LOAD Jan 15 00:41:13.369000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2512 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234373536636439363934653336653131663335346536646464386563 Jan 15 00:41:13.369000 audit: BPF prog-id=109 op=LOAD Jan 15 00:41:13.369000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2512 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.369000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234373536636439363934653336653131663335346536646464386563 Jan 15 00:41:13.370000 audit: BPF prog-id=109 op=UNLOAD Jan 15 00:41:13.370000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234373536636439363934653336653131663335346536646464386563 Jan 15 00:41:13.370000 audit: BPF prog-id=108 op=UNLOAD Jan 15 00:41:13.370000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234373536636439363934653336653131663335346536646464386563 Jan 15 00:41:13.370000 audit: BPF prog-id=110 op=LOAD Jan 15 00:41:13.370000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2512 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:13.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234373536636439363934653336653131663335346536646464386563 Jan 15 00:41:13.441355 containerd[1637]: time="2026-01-15T00:41:13.441243216Z" level=info msg="StartContainer for \"aa6476ff6b1babe7bdc1814c948078e55c40938abbbd7d3430658f513c8518f3\" returns successfully" Jan 15 00:41:13.524854 containerd[1637]: time="2026-01-15T00:41:13.524452737Z" level=info msg="StartContainer for \"b4756cd9694e36e11f354e6ddd8ecc39f454ce197aa656542ac9654750a3edd5\" returns successfully" Jan 15 00:41:13.529044 containerd[1637]: time="2026-01-15T00:41:13.529023429Z" level=info msg="StartContainer for \"88d64f2bac6d21b04c674bb1b5949b8db64622332aaa305c313c0f3679b95487\" returns successfully" Jan 15 00:41:13.802909 kubelet[2427]: I0115 00:41:13.802876 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:41:14.383485 kubelet[2427]: E0115 00:41:14.383215 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:14.383485 kubelet[2427]: E0115 00:41:14.383424 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:14.390220 kubelet[2427]: E0115 
00:41:14.390084 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:14.390220 kubelet[2427]: E0115 00:41:14.390177 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:14.393895 kubelet[2427]: E0115 00:41:14.393586 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:14.394175 kubelet[2427]: E0115 00:41:14.394089 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:14.977496 update_engine[1621]: I20260115 00:41:14.977056 1621 update_attempter.cc:509] Updating boot flags... Jan 15 00:41:15.406355 kubelet[2427]: E0115 00:41:15.406324 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:15.409377 kubelet[2427]: E0115 00:41:15.407317 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:15.420980 kubelet[2427]: E0115 00:41:15.420601 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:15.422862 kubelet[2427]: E0115 00:41:15.421363 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:15.425330 kubelet[2427]: E0115 00:41:15.425311 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:15.426934 kubelet[2427]: E0115 00:41:15.426604 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:16.552059 kubelet[2427]: E0115 00:41:16.552005 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:16.553105 kubelet[2427]: E0115 00:41:16.553066 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:16.554985 kubelet[2427]: E0115 00:41:16.554966 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:16.555364 kubelet[2427]: E0115 00:41:16.555312 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:17.950983 kubelet[2427]: E0115 00:41:17.950467 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:17.950983 kubelet[2427]: E0115 00:41:17.950914 2427 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:18.169073 kubelet[2427]: E0115 00:41:18.167573 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:18.169073 kubelet[2427]: E0115 00:41:18.168111 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:19.950575 kubelet[2427]: E0115 00:41:19.950443 2427 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 00:41:20.498061 kubelet[2427]: E0115 00:41:20.497373 2427 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 00:41:20.498061 kubelet[2427]: E0115 00:41:20.497709 2427 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:22.102999 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1008525760 wd_nsec: 1008525471 Jan 15 00:41:23.172125 kubelet[2427]: E0115 00:41:23.171503 2427 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 15 00:41:23.293406 kubelet[2427]: E0115 00:41:23.293229 2427 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ac0b0dadd921c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 00:41:09.515072028 +0000 UTC m=+2.396431147,LastTimestamp:2026-01-15 00:41:09.515072028 +0000 UTC m=+2.396431147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 00:41:23.363510 kubelet[2427]: E0115 00:41:23.363361 2427 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ac0b0e7aad0c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 00:41:09.72984954 +0000 UTC m=+2.611208649,LastTimestamp:2026-01-15 00:41:09.72984954 +0000 UTC m=+2.611208649,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 00:41:23.365050 kubelet[2427]: I0115 00:41:23.365031 2427 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 00:41:23.433008 kubelet[2427]: I0115 00:41:23.431577 2427 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:23.503556 kubelet[2427]: E0115 00:41:23.503429 2427 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:23.503556 kubelet[2427]: I0115 00:41:23.503555 2427 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:23.533124 kubelet[2427]: E0115 00:41:23.532907 2427 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:23.533124 kubelet[2427]: I0115 00:41:23.533024 2427 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:41:23.551925 kubelet[2427]: E0115 00:41:23.551885 2427 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 00:41:23.577038 kubelet[2427]: I0115 00:41:23.576904 2427 apiserver.go:52] "Watching apiserver" Jan 15 00:41:23.635479 kubelet[2427]: I0115 00:41:23.635446 2427 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 00:41:26.612574 systemd[1]: Reload requested from client PID 2717 ('systemctl') (unit session-7.scope)... Jan 15 00:41:26.612707 systemd[1]: Reloading... Jan 15 00:41:26.815924 zram_generator::config[2765]: No configuration found. Jan 15 00:41:27.172966 systemd[1]: Reloading finished in 559 ms. Jan 15 00:41:27.231977 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 00:41:27.232949 kubelet[2427]: I0115 00:41:27.232232 2427 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:41:27.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:27.260133 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 00:41:27.260682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:41:27.260967 systemd[1]: kubelet.service: Consumed 5.597s CPU time, 133.2M memory peak. Jan 15 00:41:27.267122 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 15 00:41:27.267230 kernel: audit: type=1131 audit(1768437687.260:387): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:27.268208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 15 00:41:27.301387 kernel: audit: type=1334 audit(1768437687.268:388): prog-id=111 op=LOAD Jan 15 00:41:27.301462 kernel: audit: type=1334 audit(1768437687.268:389): prog-id=73 op=UNLOAD Jan 15 00:41:27.268000 audit: BPF prog-id=111 op=LOAD Jan 15 00:41:27.268000 audit: BPF prog-id=73 op=UNLOAD Jan 15 00:41:27.268000 audit: BPF prog-id=112 op=LOAD Jan 15 00:41:27.313274 kernel: audit: type=1334 audit(1768437687.268:390): prog-id=112 op=LOAD Jan 15 00:41:27.313332 kernel: audit: type=1334 audit(1768437687.268:391): prog-id=113 op=LOAD Jan 15 00:41:27.268000 audit: BPF prog-id=113 op=LOAD Jan 15 00:41:27.320104 kernel: audit: type=1334 audit(1768437687.268:392): prog-id=74 op=UNLOAD Jan 15 00:41:27.268000 audit: BPF prog-id=74 op=UNLOAD Jan 15 00:41:27.268000 audit: BPF prog-id=75 op=UNLOAD Jan 15 00:41:27.332860 kernel: audit: type=1334 audit(1768437687.268:393): prog-id=75 op=UNLOAD Jan 15 00:41:27.332921 kernel: audit: type=1334 audit(1768437687.270:394): prog-id=114 op=LOAD Jan 15 00:41:27.270000 audit: BPF prog-id=114 op=LOAD Jan 15 00:41:27.270000 audit: BPF prog-id=115 op=LOAD Jan 15 00:41:27.344261 kernel: audit: type=1334 audit(1768437687.270:395): prog-id=115 op=LOAD Jan 15 00:41:27.344312 kernel: audit: type=1334 audit(1768437687.270:396): prog-id=76 op=UNLOAD Jan 15 00:41:27.270000 audit: BPF prog-id=76 op=UNLOAD Jan 15 00:41:27.270000 audit: BPF prog-id=77 op=UNLOAD Jan 15 00:41:27.272000 audit: BPF prog-id=116 op=LOAD Jan 15 00:41:27.272000 audit: BPF prog-id=61 op=UNLOAD Jan 15 00:41:27.273000 audit: BPF prog-id=117 op=LOAD Jan 15 00:41:27.273000 audit: BPF prog-id=62 op=UNLOAD Jan 15 00:41:27.276000 audit: BPF prog-id=118 op=LOAD Jan 15 00:41:27.276000 audit: BPF prog-id=66 op=UNLOAD Jan 15 00:41:27.276000 audit: BPF prog-id=119 op=LOAD Jan 15 00:41:27.276000 audit: BPF prog-id=120 op=LOAD Jan 15 00:41:27.276000 audit: BPF prog-id=67 op=UNLOAD Jan 15 00:41:27.276000 audit: BPF prog-id=68 op=UNLOAD Jan 15 00:41:27.277000 audit: BPF prog-id=121 op=LOAD Jan 15 00:41:27.277000 audit: BPF prog-id=78 op=UNLOAD Jan 15 00:41:27.277000 audit: BPF prog-id=122 op=LOAD Jan 15 00:41:27.277000 audit: BPF prog-id=123 op=LOAD Jan 15 00:41:27.277000 audit: BPF prog-id=79 op=UNLOAD Jan 15 00:41:27.277000 audit: BPF prog-id=80 op=UNLOAD Jan 15 00:41:27.281000 audit: BPF prog-id=124 op=LOAD Jan 15 00:41:27.281000 audit: BPF prog-id=69 op=UNLOAD Jan 15 00:41:27.282000 audit: BPF prog-id=125 op=LOAD Jan 15 00:41:27.282000 audit: BPF prog-id=63 op=UNLOAD Jan 15 00:41:27.282000 audit: BPF prog-id=126 op=LOAD Jan 15 00:41:27.282000 audit: BPF prog-id=127 op=LOAD Jan 15 00:41:27.282000 audit: BPF prog-id=64 op=UNLOAD Jan 15 00:41:27.282000 audit: BPF prog-id=65 op=UNLOAD Jan 15 00:41:27.284000 audit: BPF prog-id=128 op=LOAD Jan 15 00:41:27.284000 audit: BPF prog-id=70 op=UNLOAD Jan 15 00:41:27.284000 audit: BPF prog-id=129 op=LOAD Jan 15 00:41:27.284000 audit: BPF prog-id=130 op=LOAD Jan 15 00:41:27.284000 audit: BPF prog-id=71 op=UNLOAD Jan 15 00:41:27.284000 audit: BPF prog-id=72 op=UNLOAD Jan 15 00:41:27.630281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 00:41:27.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:41:27.649438 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 00:41:27.805870 kubelet[2808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:41:27.805870 kubelet[2808]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 00:41:27.805870 kubelet[2808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 00:41:27.805870 kubelet[2808]: I0115 00:41:27.805371 2808 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 00:41:27.833258 kubelet[2808]: I0115 00:41:27.833218 2808 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 00:41:27.833393 kubelet[2808]: I0115 00:41:27.833382 2808 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 00:41:27.833932 kubelet[2808]: I0115 00:41:27.833912 2808 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 00:41:27.836182 kubelet[2808]: I0115 00:41:27.836165 2808 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 00:41:27.840990 kubelet[2808]: I0115 00:41:27.840649 2808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 00:41:27.849900 kubelet[2808]: I0115 00:41:27.849880 2808 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 00:41:27.877120 kubelet[2808]: I0115 00:41:27.877085 2808 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 00:41:27.878119 kubelet[2808]: I0115 00:41:27.878080 2808 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 00:41:27.878420 kubelet[2808]: I0115 00:41:27.878211 2808 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 00:41:27.878946 kubelet[2808]: I0115 00:41:27.878930 2808 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 00:41:27.879039 kubelet[2808]: I0115 00:41:27.879025 2808 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 00:41:27.879168 kubelet[2808]: I0115 00:41:27.879154 2808 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:41:27.879683 kubelet[2808]: I0115 00:41:27.879661 2808 kubelet.go:446] "Attempting to sync node with API server" Jan 15 00:41:27.880185 kubelet[2808]: I0115 00:41:27.880168 2808 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 00:41:27.880362 kubelet[2808]: I0115 00:41:27.880347 2808 kubelet.go:352] "Adding apiserver pod source" Jan 15 00:41:27.881460 kubelet[2808]: I0115 00:41:27.880429 2808 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 00:41:27.885223 kubelet[2808]: I0115 00:41:27.883444 2808 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 00:41:27.885223 kubelet[2808]: I0115 00:41:27.884227 2808 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 00:41:27.887130 kubelet[2808]: I0115 00:41:27.887108 2808 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 00:41:27.887227 kubelet[2808]: I0115 00:41:27.887212 2808 server.go:1287] "Started kubelet" Jan 15 00:41:27.888173 kubelet[2808]: I0115 00:41:27.888128 2808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 00:41:27.892482 kubelet[2808]: I0115 
00:41:27.892098 2808 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 00:41:27.897084 kubelet[2808]: I0115 00:41:27.887472 2808 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 00:41:27.900968 kubelet[2808]: I0115 00:41:27.899692 2808 server.go:479] "Adding debug handlers to kubelet server" Jan 15 00:41:27.903451 kubelet[2808]: I0115 00:41:27.900492 2808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 00:41:27.908002 kubelet[2808]: I0115 00:41:27.907889 2808 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 00:41:27.910485 kubelet[2808]: E0115 00:41:27.910465 2808 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 00:41:27.917813 kubelet[2808]: I0115 00:41:27.917496 2808 factory.go:221] Registration of the systemd container factory successfully Jan 15 00:41:27.918341 kubelet[2808]: I0115 00:41:27.918174 2808 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 00:41:27.924400 kubelet[2808]: I0115 00:41:27.917943 2808 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 00:41:27.931164 kubelet[2808]: I0115 00:41:27.917932 2808 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 00:41:27.935969 kubelet[2808]: I0115 00:41:27.934199 2808 reconciler.go:26] "Reconciler: start to sync state" Jan 15 00:41:27.942286 kubelet[2808]: I0115 00:41:27.941975 2808 factory.go:221] Registration of the containerd container factory successfully Jan 15 00:41:27.942286 kubelet[2808]: E0115 00:41:27.942134 2808 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 00:41:28.017909 kubelet[2808]: I0115 00:41:28.017387 2808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 00:41:28.035927 kubelet[2808]: I0115 00:41:28.035518 2808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 00:41:28.040054 kubelet[2808]: I0115 00:41:28.037704 2808 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 00:41:28.040054 kubelet[2808]: I0115 00:41:28.038213 2808 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 00:41:28.040054 kubelet[2808]: I0115 00:41:28.038225 2808 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 00:41:28.040054 kubelet[2808]: E0115 00:41:28.038294 2808 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101157 2808 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101183 2808 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101211 2808 state_mem.go:36] "Initialized new in-memory state store" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101529 2808 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101555 2808 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101687 2808 policy_none.go:49] "None policy: Start" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101702 2808 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.101911 2808 state_mem.go:35] "Initializing new in-memory state store" Jan 15 00:41:28.102901 kubelet[2808]: I0115 00:41:28.102169 2808 state_mem.go:75] "Updated machine memory state" Jan 15 00:41:28.121446 kubelet[2808]: I0115 00:41:28.121371 2808 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 00:41:28.121998 kubelet[2808]: I0115 00:41:28.121698 2808 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 00:41:28.122431 kubelet[2808]: I0115 00:41:28.122080 2808 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 00:41:28.123077 kubelet[2808]: I0115 00:41:28.122533 2808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 00:41:28.132077 kubelet[2808]: E0115 00:41:28.131869 2808 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 00:41:28.140570 kubelet[2808]: I0115 00:41:28.140359 2808 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:28.144934 kubelet[2808]: I0115 00:41:28.144092 2808 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:28.144934 kubelet[2808]: I0115 00:41:28.144284 2808 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:41:28.240539 kubelet[2808]: I0115 00:41:28.240456 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:28.240539 kubelet[2808]: I0115 00:41:28.240515 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:28.240928 kubelet[2808]: I0115 00:41:28.240545 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:28.240928 kubelet[2808]: I0115 00:41:28.240572 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:28.240928 kubelet[2808]: I0115 00:41:28.240887 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 00:41:28.240928 kubelet[2808]: I0115 00:41:28.240925 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 00:41:28.241012 kubelet[2808]: I0115 00:41:28.240950 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5f65ae989b3e60bb742bb5216e6ad8d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5f65ae989b3e60bb742bb5216e6ad8d\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:28.241012 kubelet[2808]: I0115 00:41:28.240973 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e5f65ae989b3e60bb742bb5216e6ad8d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5f65ae989b3e60bb742bb5216e6ad8d\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:28.241012 kubelet[2808]: I0115 00:41:28.240999 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5f65ae989b3e60bb742bb5216e6ad8d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e5f65ae989b3e60bb742bb5216e6ad8d\") " pod="kube-system/kube-apiserver-localhost" Jan 15 00:41:28.274283 kubelet[2808]: I0115 00:41:28.273263 2808 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 00:41:28.316174 kubelet[2808]: I0115 00:41:28.316139 2808 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 15 00:41:28.316709 kubelet[2808]: I0115 00:41:28.316689 2808 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 00:41:28.486119 kubelet[2808]: E0115 00:41:28.485679 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:28.486119 kubelet[2808]: E0115 00:41:28.486030 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:28.501148 kubelet[2808]: E0115 00:41:28.497508 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:28.895163 kubelet[2808]: I0115 00:41:28.892017 2808 apiserver.go:52] "Watching apiserver" Jan 15 00:41:28.925104 kubelet[2808]: I0115 00:41:28.924901 2808 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 00:41:29.161038 kubelet[2808]: I0115 00:41:29.160465 2808 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 00:41:29.164881 kubelet[2808]: E0115 00:41:29.164193 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:29.164881 kubelet[2808]: E0115 00:41:29.164699 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:29.698117 kubelet[2808]: E0115 00:41:29.697394 2808 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 15 00:41:29.699877 kubelet[2808]: E0115 00:41:29.699332 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:29.966552 kubelet[2808]: I0115 00:41:29.963112 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.961702191 podStartE2EDuration="1.961702191s" podCreationTimestamp="2026-01-15 00:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:41:29.961227957 +0000 
UTC m=+2.292263361" watchObservedRunningTime="2026-01-15 00:41:29.961702191 +0000 UTC m=+2.292737595" Jan 15 00:41:30.062673 kubelet[2808]: I0115 00:41:30.062379 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.062360412 podStartE2EDuration="2.062360412s" podCreationTimestamp="2026-01-15 00:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:41:30.017537021 +0000 UTC m=+2.348572426" watchObservedRunningTime="2026-01-15 00:41:30.062360412 +0000 UTC m=+2.393395816" Jan 15 00:41:30.166749 kubelet[2808]: E0115 00:41:30.166401 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:30.166749 kubelet[2808]: E0115 00:41:30.166563 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:31.181883 kubelet[2808]: I0115 00:41:31.180155 2808 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 00:41:31.182394 containerd[1637]: time="2026-01-15T00:41:31.182168374Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 00:41:31.187434 kubelet[2808]: I0115 00:41:31.187310 2808 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 00:41:31.695932 kubelet[2808]: E0115 00:41:31.694900 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:31.737919 kubelet[2808]: I0115 00:41:31.736925 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.7369006369999997 podStartE2EDuration="3.736900637s" podCreationTimestamp="2026-01-15 00:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:41:30.063674643 +0000 UTC m=+2.394710067" watchObservedRunningTime="2026-01-15 00:41:31.736900637 +0000 UTC m=+4.067936051" Jan 15 00:41:32.121565 systemd[1]: Created slice kubepods-besteffort-pod546f3cee_47e8_43dc_a99d_3d8a7f47d366.slice - libcontainer container kubepods-besteffort-pod546f3cee_47e8_43dc_a99d_3d8a7f47d366.slice. 
Jan 15 00:41:32.190510 kubelet[2808]: E0115 00:41:32.189682 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:32.262258 kubelet[2808]: I0115 00:41:32.262013 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/546f3cee-47e8-43dc-a99d-3d8a7f47d366-kube-proxy\") pod \"kube-proxy-zb5b8\" (UID: \"546f3cee-47e8-43dc-a99d-3d8a7f47d366\") " pod="kube-system/kube-proxy-zb5b8" Jan 15 00:41:32.262258 kubelet[2808]: I0115 00:41:32.262181 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/546f3cee-47e8-43dc-a99d-3d8a7f47d366-xtables-lock\") pod \"kube-proxy-zb5b8\" (UID: \"546f3cee-47e8-43dc-a99d-3d8a7f47d366\") " pod="kube-system/kube-proxy-zb5b8" Jan 15 00:41:32.262258 kubelet[2808]: I0115 00:41:32.262201 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsbd\" (UniqueName: \"kubernetes.io/projected/546f3cee-47e8-43dc-a99d-3d8a7f47d366-kube-api-access-fqsbd\") pod \"kube-proxy-zb5b8\" (UID: \"546f3cee-47e8-43dc-a99d-3d8a7f47d366\") " pod="kube-system/kube-proxy-zb5b8" Jan 15 00:41:32.262258 kubelet[2808]: I0115 00:41:32.262259 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/546f3cee-47e8-43dc-a99d-3d8a7f47d366-lib-modules\") pod \"kube-proxy-zb5b8\" (UID: \"546f3cee-47e8-43dc-a99d-3d8a7f47d366\") " pod="kube-system/kube-proxy-zb5b8" Jan 15 00:41:32.447963 kubelet[2808]: E0115 00:41:32.447348 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:32.452273 systemd[1]: Created slice kubepods-besteffort-podbbdf77d1_60df_4b27_bee0_52b6a011bc5e.slice - libcontainer container kubepods-besteffort-podbbdf77d1_60df_4b27_bee0_52b6a011bc5e.slice. 
Jan 15 00:41:32.456566 containerd[1637]: time="2026-01-15T00:41:32.456385511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb5b8,Uid:546f3cee-47e8-43dc-a99d-3d8a7f47d366,Namespace:kube-system,Attempt:0,}" Jan 15 00:41:32.477680 kubelet[2808]: I0115 00:41:32.475964 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd6c\" (UniqueName: \"kubernetes.io/projected/bbdf77d1-60df-4b27-bee0-52b6a011bc5e-kube-api-access-2gd6c\") pod \"tigera-operator-7dcd859c48-s9sjc\" (UID: \"bbdf77d1-60df-4b27-bee0-52b6a011bc5e\") " pod="tigera-operator/tigera-operator-7dcd859c48-s9sjc" Jan 15 00:41:32.477680 kubelet[2808]: I0115 00:41:32.476010 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbdf77d1-60df-4b27-bee0-52b6a011bc5e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-s9sjc\" (UID: \"bbdf77d1-60df-4b27-bee0-52b6a011bc5e\") " pod="tigera-operator/tigera-operator-7dcd859c48-s9sjc" Jan 15 00:41:32.619060 containerd[1637]: time="2026-01-15T00:41:32.617686509Z" level=info msg="connecting to shim 765bc209c8ba6cb222d9e0c1c374b3b9c2a98159dc78c95b0affe2206b0582e4" address="unix:///run/containerd/s/bbc3539055238905b00362c095431f7a22505a2e955c8aa789426e7ed2238c08" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:41:32.777138 containerd[1637]: time="2026-01-15T00:41:32.775464097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-s9sjc,Uid:bbdf77d1-60df-4b27-bee0-52b6a011bc5e,Namespace:tigera-operator,Attempt:0,}" Jan 15 00:41:32.947136 containerd[1637]: time="2026-01-15T00:41:32.945104689Z" level=info msg="connecting to shim 7a97a09eb19627003688e0277385090fcc73bc122ba29f9f0f628ea716556aaf" address="unix:///run/containerd/s/1659a721f166bd292db6abca02e05d05ef084155dc6537427057faa6c0dff4db" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:41:32.945281 systemd[1]: Started cri-containerd-765bc209c8ba6cb222d9e0c1c374b3b9c2a98159dc78c95b0affe2206b0582e4.scope - libcontainer container 765bc209c8ba6cb222d9e0c1c374b3b9c2a98159dc78c95b0affe2206b0582e4. 
Jan 15 00:41:33.013000 audit: BPF prog-id=131 op=LOAD Jan 15 00:41:33.021077 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 15 00:41:33.021169 kernel: audit: type=1334 audit(1768437693.013:429): prog-id=131 op=LOAD Jan 15 00:41:33.016000 audit: BPF prog-id=132 op=LOAD Jan 15 00:41:33.037700 kernel: audit: type=1334 audit(1768437693.016:430): prog-id=132 op=LOAD Jan 15 00:41:33.016000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.066230 kernel: audit: type=1300 audit(1768437693.016:430): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.096094 kernel: audit: type=1327 audit(1768437693.016:430): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.096196 kernel: audit: type=1334 audit(1768437693.016:431): prog-id=132 op=UNLOAD Jan 15 00:41:33.016000 audit: BPF prog-id=132 op=UNLOAD Jan 15 00:41:33.103034 kernel: audit: type=1300 audit(1768437693.016:431): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.016000 audit[2881]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.169205 kernel: audit: type=1327 audit(1768437693.016:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.017000 audit: BPF prog-id=133 op=LOAD Jan 15 00:41:33.177047 kernel: audit: type=1334 audit(1768437693.017:432): prog-id=133 op=LOAD Jan 15 00:41:33.017000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.211105 kernel: audit: type=1300 audit(1768437693.017:432): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.211216 kernel: audit: type=1327 audit(1768437693.017:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.212367 systemd[1]: Started cri-containerd-7a97a09eb19627003688e0277385090fcc73bc122ba29f9f0f628ea716556aaf.scope - libcontainer container 7a97a09eb19627003688e0277385090fcc73bc122ba29f9f0f628ea716556aaf. Jan 15 00:41:33.225162 containerd[1637]: time="2026-01-15T00:41:33.223187082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb5b8,Uid:546f3cee-47e8-43dc-a99d-3d8a7f47d366,Namespace:kube-system,Attempt:0,} returns sandbox id \"765bc209c8ba6cb222d9e0c1c374b3b9c2a98159dc78c95b0affe2206b0582e4\"" Jan 15 00:41:33.226665 kubelet[2808]: E0115 00:41:33.225379 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:33.237905 containerd[1637]: time="2026-01-15T00:41:33.233371812Z" level=info msg="CreateContainer within sandbox \"765bc209c8ba6cb222d9e0c1c374b3b9c2a98159dc78c95b0affe2206b0582e4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 00:41:33.018000 audit: BPF prog-id=134 op=LOAD Jan 15 00:41:33.018000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.018000 audit: BPF prog-id=134 op=UNLOAD Jan 15 00:41:33.018000 audit[2881]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.018000 audit: BPF prog-id=133 op=UNLOAD Jan 15 00:41:33.018000 
audit[2881]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.018000 audit: BPF prog-id=135 op=LOAD Jan 15 00:41:33.018000 audit[2881]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2869 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736356263323039633862613663623232326439653063316333373462 Jan 15 00:41:33.264000 audit: BPF prog-id=136 op=LOAD Jan 15 00:41:33.267000 audit: BPF prog-id=137 op=LOAD Jan 15 00:41:33.267000 audit[2922]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2904 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393761303965623139363237303033363838653032373733383530 Jan 15 00:41:33.267000 audit: BPF prog-id=137 op=UNLOAD Jan 15 00:41:33.267000 audit[2922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2904 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393761303965623139363237303033363838653032373733383530 Jan 15 00:41:33.268000 audit: BPF prog-id=138 op=LOAD Jan 15 00:41:33.268000 audit[2922]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2904 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393761303965623139363237303033363838653032373733383530 Jan 15 00:41:33.268000 audit: BPF prog-id=139 op=LOAD Jan 15 00:41:33.268000 audit[2922]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 
items=0 ppid=2904 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393761303965623139363237303033363838653032373733383530 Jan 15 00:41:33.269000 audit: BPF prog-id=139 op=UNLOAD Jan 15 00:41:33.269000 audit[2922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2904 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393761303965623139363237303033363838653032373733383530 Jan 15 00:41:33.269000 audit: BPF prog-id=138 op=UNLOAD Jan 15 00:41:33.269000 audit[2922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2904 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393761303965623139363237303033363838653032373733383530 Jan 15 00:41:33.269000 audit: BPF prog-id=140 op=LOAD Jan 15 00:41:33.269000 audit[2922]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2904 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761393761303965623139363237303033363838653032373733383530 Jan 15 00:41:33.278325 containerd[1637]: time="2026-01-15T00:41:33.278184483Z" level=info msg="Container f5d726cae8a324d7a3c7ea528b20c08ad66677c5c0f35fe3ed5c1ce8fb69d310: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:41:33.316139 containerd[1637]: time="2026-01-15T00:41:33.315975612Z" level=info msg="CreateContainer within sandbox \"765bc209c8ba6cb222d9e0c1c374b3b9c2a98159dc78c95b0affe2206b0582e4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5d726cae8a324d7a3c7ea528b20c08ad66677c5c0f35fe3ed5c1ce8fb69d310\"" Jan 15 00:41:33.320878 containerd[1637]: time="2026-01-15T00:41:33.320566190Z" level=info msg="StartContainer for \"f5d726cae8a324d7a3c7ea528b20c08ad66677c5c0f35fe3ed5c1ce8fb69d310\"" Jan 15 00:41:33.335962 containerd[1637]: time="2026-01-15T00:41:33.335694948Z" level=info msg="connecting to shim f5d726cae8a324d7a3c7ea528b20c08ad66677c5c0f35fe3ed5c1ce8fb69d310" 
address="unix:///run/containerd/s/bbc3539055238905b00362c095431f7a22505a2e955c8aa789426e7ed2238c08" protocol=ttrpc version=3 Jan 15 00:41:33.438379 containerd[1637]: time="2026-01-15T00:41:33.438323270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-s9sjc,Uid:bbdf77d1-60df-4b27-bee0-52b6a011bc5e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7a97a09eb19627003688e0277385090fcc73bc122ba29f9f0f628ea716556aaf\"" Jan 15 00:41:33.438346 systemd[1]: Started cri-containerd-f5d726cae8a324d7a3c7ea528b20c08ad66677c5c0f35fe3ed5c1ce8fb69d310.scope - libcontainer container f5d726cae8a324d7a3c7ea528b20c08ad66677c5c0f35fe3ed5c1ce8fb69d310. Jan 15 00:41:33.444213 containerd[1637]: time="2026-01-15T00:41:33.443887369Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 00:41:33.585000 audit: BPF prog-id=141 op=LOAD Jan 15 00:41:33.585000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2869 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635643732366361653861333234643761336337656135323862323063 Jan 15 00:41:33.585000 audit: BPF prog-id=142 op=LOAD Jan 15 00:41:33.585000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2869 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635643732366361653861333234643761336337656135323862323063 Jan 15 00:41:33.585000 audit: BPF prog-id=142 op=UNLOAD Jan 15 00:41:33.585000 audit[2949]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635643732366361653861333234643761336337656135323862323063 Jan 15 00:41:33.585000 audit: BPF prog-id=141 op=UNLOAD Jan 15 00:41:33.585000 audit[2949]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2869 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635643732366361653861333234643761336337656135323862323063 Jan 15 
00:41:33.585000 audit: BPF prog-id=143 op=LOAD Jan 15 00:41:33.585000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2869 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:33.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635643732366361653861333234643761336337656135323862323063 Jan 15 00:41:33.683359 containerd[1637]: time="2026-01-15T00:41:33.683164379Z" level=info msg="StartContainer for \"f5d726cae8a324d7a3c7ea528b20c08ad66677c5c0f35fe3ed5c1ce8fb69d310\" returns successfully" Jan 15 00:41:33.919517 kubelet[2808]: E0115 00:41:33.915526 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:34.207385 kubelet[2808]: E0115 00:41:34.207045 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:34.210167 kubelet[2808]: E0115 00:41:34.208336 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:34.305000 audit[3020]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.305000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe010c7730 a2=0 a3=7ffe010c771c items=0 ppid=2968 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 00:41:34.316184 kubelet[2808]: I0115 00:41:34.307466 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zb5b8" podStartSLOduration=2.307450609 podStartE2EDuration="2.307450609s" podCreationTimestamp="2026-01-15 00:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:41:34.261048875 +0000 UTC m=+6.592084289" watchObservedRunningTime="2026-01-15 00:41:34.307450609 +0000 UTC m=+6.638486013" Jan 15 00:41:34.327000 audit[3022]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.327000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbdcbcf40 a2=0 a3=7ffcbdcbcf2c items=0 ppid=2968 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 00:41:34.327000 audit[3021]: NETFILTER_CFG 
table=mangle:56 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.327000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6bc503b0 a2=0 a3=7ffd6bc5039c items=0 ppid=2968 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 00:41:34.341000 audit[3024]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.341000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdde5921a0 a2=0 a3=7ffdde59218c items=0 ppid=2968 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 00:41:34.348000 audit[3025]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.348000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc75cd7f20 a2=0 a3=7ffc75cd7f0c items=0 ppid=2968 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 00:41:34.361000 audit[3029]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.361000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff06d806d0 a2=0 a3=7fff06d806bc items=0 ppid=2968 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 00:41:34.442000 audit[3030]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.442000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdb0275f90 a2=0 a3=7ffdb0275f7c items=0 ppid=2968 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 00:41:34.457000 audit[3032]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.457000 audit[3032]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd604c4410 a2=0 a3=7ffd604c43fc items=0 ppid=2968 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.457000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 15 00:41:34.480000 audit[3035]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.480000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce25c7be0 a2=0 a3=7ffce25c7bcc items=0 ppid=2968 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 15 00:41:34.488000 audit[3036]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.488000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce2ff8600 a2=0 a3=7ffce2ff85ec items=0 ppid=2968 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 00:41:34.501000 audit[3038]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.501000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffeca6e040 a2=0 a3=7fffeca6e02c items=0 ppid=2968 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 00:41:34.507000 audit[3039]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.507000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd64d90810 a2=0 a3=7ffd64d907fc items=0 ppid=2968 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.507000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 00:41:34.521000 audit[3041]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.521000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcdda81f70 a2=0 a3=7ffcdda81f5c items=0 ppid=2968 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 00:41:34.541000 audit[3044]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.541000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4d794350 a2=0 a3=7ffd4d79433c items=0 ppid=2968 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 15 00:41:34.548000 audit[3045]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.548000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd8b862a0 a2=0 a3=7ffcd8b8628c items=0 ppid=2968 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 00:41:34.564000 audit[3047]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.564000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd19d14a0 a2=0 a3=7ffdd19d148c items=0 ppid=2968 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 00:41:34.571000 audit[3048]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.571000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd92725800 a2=0 a3=7ffd927257ec items=0 ppid=2968 pid=3048 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 00:41:34.588000 audit[3050]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.588000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4d0d7fe0 a2=0 a3=7ffc4d0d7fcc items=0 ppid=2968 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:41:34.608000 audit[3053]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.608000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff28d13910 a2=0 a3=7fff28d138fc items=0 ppid=2968 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.608000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:41:34.630000 audit[3056]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.630000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfccab220 a2=0 a3=7ffcfccab20c items=0 ppid=2968 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 00:41:34.637000 audit[3057]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.637000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc593bcaf0 a2=0 a3=7ffc593bcadc items=0 ppid=2968 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 00:41:34.652000 audit[3059]: NETFILTER_CFG 
table=nat:75 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.652000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcdb09bb50 a2=0 a3=7ffcdb09bb3c items=0 ppid=2968 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.652000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:41:34.680000 audit[3062]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.680000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb6cb3130 a2=0 a3=7ffcb6cb311c items=0 ppid=2968 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:41:34.696000 audit[3064]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.696000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce76cd620 a2=0 a3=7ffce76cd60c items=0 ppid=2968 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 00:41:34.733000 audit[3069]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 00:41:34.733000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd2d4b21b0 a2=0 a3=7ffd2d4b219c items=0 ppid=2968 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 00:41:34.845040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18859048.mount: Deactivated successfully. 
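(Reader's note, not part of the captured log: the audit PROCTITLE fields recorded above are hex-encoded, NUL-separated argv strings. A minimal sketch of how one of the values shown earlier can be decoded back into a readable command line, assuming the standard auditd hex encoding; the helper name is illustrative and not taken from any tool in this log.)

```python
# Minimal sketch: decode an auditd PROCTITLE value (hex of the NUL-separated argv)
# into a readable command line. The sample value is copied from the log above.
def decode_proctitle(hex_argv: str) -> str:
    # NUL bytes separate argv elements; render them as spaces for readability.
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode("utf-8", errors="replace")

sample = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
print(decode_proctitle(sample))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

The same decoding applies to every proctitle= field in this log, including the iptables-restore, ip6tables, and runc invocations that follow.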
Jan 15 00:41:34.885000 audit[3075]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:34.885000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe83e24730 a2=0 a3=7ffe83e2471c items=0 ppid=2968 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:34.902000 audit[3075]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:34.902000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe83e24730 a2=0 a3=7ffe83e2471c items=0 ppid=2968 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:34.909000 audit[3080]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.909000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd7104220 a2=0 a3=7ffdd710420c items=0 ppid=2968 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 00:41:34.931000 audit[3082]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.931000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd74d6a0d0 a2=0 a3=7ffd74d6a0bc items=0 ppid=2968 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 15 00:41:34.955000 audit[3085]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.955000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe5196d530 a2=0 a3=7ffe5196d51c items=0 ppid=2968 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.955000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 15 00:41:34.963000 audit[3086]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.963000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1c243470 a2=0 a3=7ffc1c24345c items=0 ppid=2968 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 00:41:34.978000 audit[3088]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.978000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3d317640 a2=0 a3=7fff3d31762c items=0 ppid=2968 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.978000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 00:41:34.989000 audit[3089]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:34.989000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbd257b80 a2=0 a3=7ffdbd257b6c items=0 ppid=2968 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:34.989000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 00:41:35.005000 audit[3091]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.005000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdfd9ee420 a2=0 a3=7ffdfd9ee40c items=0 ppid=2968 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 15 00:41:35.030000 audit[3098]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.030000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffd74ecc60 a2=0 
a3=7fffd74ecc4c items=0 ppid=2968 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 00:41:35.037000 audit[3099]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.037000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff594ce970 a2=0 a3=7fff594ce95c items=0 ppid=2968 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 00:41:35.053000 audit[3101]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.053000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff4203c1b0 a2=0 a3=7fff4203c19c items=0 ppid=2968 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 00:41:35.064000 audit[3102]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.064000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb9c81bc0 a2=0 a3=7ffcb9c81bac items=0 ppid=2968 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 00:41:35.083000 audit[3104]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.083000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff35a770e0 a2=0 a3=7fff35a770cc items=0 ppid=2968 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 00:41:35.105000 
audit[3107]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.105000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec4da57e0 a2=0 a3=7ffec4da57cc items=0 ppid=2968 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 00:41:35.135000 audit[3110]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.135000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9fc25330 a2=0 a3=7ffc9fc2531c items=0 ppid=2968 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 15 00:41:35.143000 audit[3111]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.143000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffde612dde0 a2=0 a3=7ffde612ddcc items=0 ppid=2968 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 00:41:35.158000 audit[3113]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.158000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd84c8b880 a2=0 a3=7ffd84c8b86c items=0 ppid=2968 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:41:35.186000 audit[3116]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.186000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdbb1a060 a2=0 a3=7fffdbb1a04c items=0 ppid=2968 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 00:41:35.195000 audit[3117]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.195000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf11e70d0 a2=0 a3=7ffdf11e70bc items=0 ppid=2968 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 00:41:35.220000 audit[3119]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.220000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd40bde250 a2=0 a3=7ffd40bde23c items=0 ppid=2968 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 00:41:35.226862 kubelet[2808]: E0115 00:41:35.225997 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:35.235000 audit[3120]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.235000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff03704350 a2=0 a3=7fff0370433c items=0 ppid=2968 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 00:41:35.259000 audit[3122]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 00:41:35.259000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcd9d73170 a2=0 a3=7ffcd9d7315c items=0 ppid=2968 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:41:35.280000 audit[3125]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 15 00:41:35.280000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc96c51c10 a2=0 a3=7ffc96c51bfc items=0 ppid=2968 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 00:41:35.303000 audit[3127]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 00:41:35.303000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe9b0583d0 a2=0 a3=7ffe9b0583bc items=0 ppid=2968 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.303000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:35.305000 audit[3127]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 00:41:35.305000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe9b0583d0 a2=0 a3=7ffe9b0583bc items=0 ppid=2968 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:35.305000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:36.857544 containerd[1637]: time="2026-01-15T00:41:36.857419599Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:36.859382 containerd[1637]: time="2026-01-15T00:41:36.859206886Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 15 00:41:36.861315 containerd[1637]: time="2026-01-15T00:41:36.861255726Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:36.866269 containerd[1637]: time="2026-01-15T00:41:36.866226557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:36.866949 containerd[1637]: time="2026-01-15T00:41:36.866911467Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.422984455s" Jan 15 00:41:36.869519 containerd[1637]: time="2026-01-15T00:41:36.867293371Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 15 00:41:36.873504 
containerd[1637]: time="2026-01-15T00:41:36.873463203Z" level=info msg="CreateContainer within sandbox \"7a97a09eb19627003688e0277385090fcc73bc122ba29f9f0f628ea716556aaf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 00:41:36.911866 containerd[1637]: time="2026-01-15T00:41:36.910113879Z" level=info msg="Container 3939b8f2909b670ab28134147179b1e8aaf40e407e49b28ff5a90778b0ce3435: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:41:36.927295 containerd[1637]: time="2026-01-15T00:41:36.927083912Z" level=info msg="CreateContainer within sandbox \"7a97a09eb19627003688e0277385090fcc73bc122ba29f9f0f628ea716556aaf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3939b8f2909b670ab28134147179b1e8aaf40e407e49b28ff5a90778b0ce3435\"" Jan 15 00:41:36.930934 containerd[1637]: time="2026-01-15T00:41:36.928706790Z" level=info msg="StartContainer for \"3939b8f2909b670ab28134147179b1e8aaf40e407e49b28ff5a90778b0ce3435\"" Jan 15 00:41:36.930934 containerd[1637]: time="2026-01-15T00:41:36.930066066Z" level=info msg="connecting to shim 3939b8f2909b670ab28134147179b1e8aaf40e407e49b28ff5a90778b0ce3435" address="unix:///run/containerd/s/1659a721f166bd292db6abca02e05d05ef084155dc6537427057faa6c0dff4db" protocol=ttrpc version=3 Jan 15 00:41:36.974571 systemd[1]: Started cri-containerd-3939b8f2909b670ab28134147179b1e8aaf40e407e49b28ff5a90778b0ce3435.scope - libcontainer container 3939b8f2909b670ab28134147179b1e8aaf40e407e49b28ff5a90778b0ce3435. Jan 15 00:41:37.005000 audit: BPF prog-id=144 op=LOAD Jan 15 00:41:37.007000 audit: BPF prog-id=145 op=LOAD Jan 15 00:41:37.007000 audit[3128]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2904 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:37.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333962386632393039623637306162323831333431343731373962 Jan 15 00:41:37.007000 audit: BPF prog-id=145 op=UNLOAD Jan 15 00:41:37.007000 audit[3128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2904 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:37.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333962386632393039623637306162323831333431343731373962 Jan 15 00:41:37.008000 audit: BPF prog-id=146 op=LOAD Jan 15 00:41:37.008000 audit[3128]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2904 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:37.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333962386632393039623637306162323831333431343731373962 Jan 15 00:41:37.008000 audit: BPF prog-id=147 op=LOAD Jan 15 00:41:37.008000 audit[3128]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2904 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:37.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333962386632393039623637306162323831333431343731373962 Jan 15 00:41:37.008000 audit: BPF prog-id=147 op=UNLOAD Jan 15 00:41:37.008000 audit[3128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2904 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:37.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333962386632393039623637306162323831333431343731373962 Jan 15 00:41:37.008000 audit: BPF prog-id=146 op=UNLOAD Jan 15 00:41:37.008000 audit[3128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2904 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:37.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333962386632393039623637306162323831333431343731373962 Jan 15 00:41:37.008000 audit: BPF prog-id=148 op=LOAD Jan 15 00:41:37.008000 audit[3128]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2904 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:37.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333962386632393039623637306162323831333431343731373962 Jan 15 00:41:37.072218 containerd[1637]: time="2026-01-15T00:41:37.072157333Z" level=info msg="StartContainer for \"3939b8f2909b670ab28134147179b1e8aaf40e407e49b28ff5a90778b0ce3435\" returns successfully" Jan 15 00:41:38.067319 kubelet[2808]: E0115 00:41:38.067223 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:38.105690 kubelet[2808]: I0115 00:41:38.105064 2808 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="tigera-operator/tigera-operator-7dcd859c48-s9sjc" podStartSLOduration=2.676514877 podStartE2EDuration="6.105045329s" podCreationTimestamp="2026-01-15 00:41:32 +0000 UTC" firstStartedPulling="2026-01-15 00:41:33.441561191 +0000 UTC m=+5.772596595" lastFinishedPulling="2026-01-15 00:41:36.870091644 +0000 UTC m=+9.201127047" observedRunningTime="2026-01-15 00:41:37.261289659 +0000 UTC m=+9.592325083" watchObservedRunningTime="2026-01-15 00:41:38.105045329 +0000 UTC m=+10.436080733" Jan 15 00:41:38.240103 kubelet[2808]: E0115 00:41:38.239916 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:45.476929 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 15 00:41:45.499922 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 15 00:41:45.500108 kernel: audit: type=1106 audit(1768437705.476:509): pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:41:45.476000 audit[1844]: USER_END pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:41:45.500265 sshd[1843]: Connection closed by 10.0.0.1 port 50386 Jan 15 00:41:45.517287 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Jan 15 00:41:45.477000 audit[1844]: CRED_DISP pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 00:41:45.536067 systemd-logind[1618]: Session 7 logged out. Waiting for processes to exit. Jan 15 00:41:45.538530 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:50386.service: Deactivated successfully. Jan 15 00:41:45.547055 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 00:41:45.547894 systemd[1]: session-7.scope: Consumed 9.234s CPU time, 218.3M memory peak. Jan 15 00:41:45.523000 audit[1840]: USER_END pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:41:45.557887 systemd-logind[1618]: Removed session 7. Jan 15 00:41:45.595333 kernel: audit: type=1104 audit(1768437705.477:510): pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 15 00:41:45.595454 kernel: audit: type=1106 audit(1768437705.523:511): pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:41:45.524000 audit[1840]: CRED_DISP pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:41:45.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:50386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:45.667159 kernel: audit: type=1104 audit(1768437705.524:512): pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:41:45.667276 kernel: audit: type=1131 audit(1768437705.540:513): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:50386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:41:46.353000 audit[3226]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:46.377889 kernel: audit: type=1325 audit(1768437706.353:514): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:46.353000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd56824490 a2=0 a3=7ffd5682447c items=0 ppid=2968 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:46.448304 kernel: audit: type=1300 audit(1768437706.353:514): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd56824490 a2=0 a3=7ffd5682447c items=0 ppid=2968 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:46.448430 kernel: audit: type=1327 audit(1768437706.353:514): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:46.353000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:46.380000 audit[3226]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:46.469000 kernel: audit: type=1325 audit(1768437706.380:515): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:46.380000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd56824490 a2=0 a3=0 items=0 ppid=2968 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:46.511095 kernel: audit: type=1300 audit(1768437706.380:515): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd56824490 a2=0 a3=0 items=0 ppid=2968 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:46.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:46.433000 audit[3228]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:46.433000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc951ee780 a2=0 a3=7ffc951ee76c items=0 ppid=2968 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:46.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:46.516000 audit[3228]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:46.516000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc951ee780 a2=0 a3=0 items=0 ppid=2968 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:46.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:50.961946 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 15 00:41:50.962075 kernel: audit: type=1325 audit(1768437710.950:518): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:50.950000 audit[3230]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:50.950000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc6d7b3d80 a2=0 a3=7ffc6d7b3d6c items=0 ppid=2968 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:51.024347 kernel: audit: type=1300 audit(1768437710.950:518): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc6d7b3d80 a2=0 a3=7ffc6d7b3d6c items=0 ppid=2968 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:51.025944 kernel: audit: type=1327 audit(1768437710.950:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:50.950000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:51.046087 kernel: audit: type=1325 audit(1768437711.027:519): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:51.027000 audit[3230]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:51.068342 kernel: audit: type=1300 audit(1768437711.027:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6d7b3d80 a2=0 a3=0 items=0 ppid=2968 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:51.027000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6d7b3d80 a2=0 a3=0 items=0 ppid=2968 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:51.113082 kernel: audit: type=1327 audit(1768437711.027:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:51.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:51.190000 audit[3232]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:51.190000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd2f92e950 a2=0 a3=7ffd2f92e93c items=0 ppid=2968 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:51.260900 kernel: audit: type=1325 audit(1768437711.190:520): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:51.260983 kernel: audit: type=1300 audit(1768437711.190:520): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd2f92e950 a2=0 a3=7ffd2f92e93c items=0 ppid=2968 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:51.261016 kernel: audit: type=1327 audit(1768437711.190:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:51.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:51.284000 audit[3232]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:51.309063 kernel: audit: type=1325 audit(1768437711.284:521): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:51.284000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2f92e950 a2=0 a3=0 items=0 ppid=2968 pid=3232 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:51.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:52.342000 audit[3234]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:52.342000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4336bc10 a2=0 a3=7ffc4336bbfc items=0 ppid=2968 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:52.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:52.358000 audit[3234]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:52.358000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc4336bc10 a2=0 a3=0 items=0 ppid=2968 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:52.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:54.093000 audit[3236]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:54.093000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffccf6e8340 a2=0 a3=7ffccf6e832c items=0 ppid=2968 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:54.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:54.099000 audit[3236]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:54.099000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccf6e8340 a2=0 a3=0 items=0 ppid=2968 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:54.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:54.149000 audit[3238]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:54.149000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffee4ddb190 a2=0 a3=7ffee4ddb17c items=0 ppid=2968 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:54.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:54.157000 audit[3238]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:54.157000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee4ddb190 a2=0 a3=0 items=0 ppid=2968 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:54.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:54.837949 systemd[1]: Created slice kubepods-besteffort-pod1166dcff_ef3a_4acc_ac62_384abf9db327.slice - libcontainer container kubepods-besteffort-pod1166dcff_ef3a_4acc_ac62_384abf9db327.slice. Jan 15 00:41:54.870457 kubelet[2808]: I0115 00:41:54.870160 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvh2\" (UniqueName: \"kubernetes.io/projected/1166dcff-ef3a-4acc-ac62-384abf9db327-kube-api-access-hgvh2\") pod \"calico-typha-f7468cdff-8n8n8\" (UID: \"1166dcff-ef3a-4acc-ac62-384abf9db327\") " pod="calico-system/calico-typha-f7468cdff-8n8n8" Jan 15 00:41:54.870457 kubelet[2808]: I0115 00:41:54.870298 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1166dcff-ef3a-4acc-ac62-384abf9db327-tigera-ca-bundle\") pod \"calico-typha-f7468cdff-8n8n8\" (UID: \"1166dcff-ef3a-4acc-ac62-384abf9db327\") " pod="calico-system/calico-typha-f7468cdff-8n8n8" Jan 15 00:41:54.870457 kubelet[2808]: I0115 00:41:54.870318 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1166dcff-ef3a-4acc-ac62-384abf9db327-typha-certs\") pod \"calico-typha-f7468cdff-8n8n8\" (UID: \"1166dcff-ef3a-4acc-ac62-384abf9db327\") " pod="calico-system/calico-typha-f7468cdff-8n8n8" Jan 15 00:41:55.179579 kubelet[2808]: E0115 00:41:55.179188 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:55.183181 containerd[1637]: time="2026-01-15T00:41:55.181465322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7468cdff-8n8n8,Uid:1166dcff-ef3a-4acc-ac62-384abf9db327,Namespace:calico-system,Attempt:0,}" Jan 15 00:41:55.250000 audit[3242]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:55.250000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdf64001a0 a2=0 a3=7ffdf640018c items=0 ppid=2968 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:55.273000 
audit[3242]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:41:55.273000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdf64001a0 a2=0 a3=0 items=0 ppid=2968 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:41:55.282027 kubelet[2808]: I0115 00:41:55.279489 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-cni-log-dir\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.282027 kubelet[2808]: I0115 00:41:55.279516 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/be73e5eb-6296-4ba8-bd88-38c1e283ad81-node-certs\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.282027 kubelet[2808]: I0115 00:41:55.279530 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-policysync\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.282027 kubelet[2808]: I0115 00:41:55.279543 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-var-run-calico\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.282027 kubelet[2808]: I0115 00:41:55.279557 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-lib-modules\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.282239 kubelet[2808]: I0115 00:41:55.279569 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be73e5eb-6296-4ba8-bd88-38c1e283ad81-tigera-ca-bundle\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.282239 kubelet[2808]: I0115 00:41:55.279582 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-var-lib-calico\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.295112 kubelet[2808]: I0115 00:41:55.285111 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-xtables-lock\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.295112 kubelet[2808]: I0115 00:41:55.285241 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-cni-bin-dir\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.295112 kubelet[2808]: I0115 00:41:55.285259 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-cni-net-dir\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.295112 kubelet[2808]: I0115 00:41:55.285272 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvc5\" (UniqueName: \"kubernetes.io/projected/be73e5eb-6296-4ba8-bd88-38c1e283ad81-kube-api-access-kqvc5\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.295112 kubelet[2808]: I0115 00:41:55.285292 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/be73e5eb-6296-4ba8-bd88-38c1e283ad81-flexvol-driver-host\") pod \"calico-node-s9mwn\" (UID: \"be73e5eb-6296-4ba8-bd88-38c1e283ad81\") " pod="calico-system/calico-node-s9mwn" Jan 15 00:41:55.286270 systemd[1]: Created slice kubepods-besteffort-podbe73e5eb_6296_4ba8_bd88_38c1e283ad81.slice - libcontainer container kubepods-besteffort-podbe73e5eb_6296_4ba8_bd88_38c1e283ad81.slice. Jan 15 00:41:55.401119 kubelet[2808]: E0115 00:41:55.401059 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.401269 kubelet[2808]: W0115 00:41:55.401252 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.401365 kubelet[2808]: E0115 00:41:55.401351 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.407524 kubelet[2808]: E0115 00:41:55.407276 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.407524 kubelet[2808]: W0115 00:41:55.407294 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.407524 kubelet[2808]: E0115 00:41:55.407310 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.412896 kubelet[2808]: E0115 00:41:55.411252 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.412982 kubelet[2808]: W0115 00:41:55.412964 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.413053 kubelet[2808]: E0115 00:41:55.413038 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.440298 kubelet[2808]: E0115 00:41:55.439559 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.440298 kubelet[2808]: W0115 00:41:55.439575 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.440298 kubelet[2808]: E0115 00:41:55.439971 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.444963 kubelet[2808]: E0115 00:41:55.441432 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.444963 kubelet[2808]: W0115 00:41:55.441445 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.444963 kubelet[2808]: E0115 00:41:55.441456 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.476059 containerd[1637]: time="2026-01-15T00:41:55.473322314Z" level=info msg="connecting to shim cd8d86b62bc5b0ce9b2af21e47688c4134f4e6e46c899256fa1fedcde10d52ad" address="unix:///run/containerd/s/f9b126773f3e2f67efbf114d39ab70e05ee13e4fa15027c488fc61e8183fb228" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:41:55.528473 kubelet[2808]: E0115 00:41:55.528348 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.528473 kubelet[2808]: W0115 00:41:55.528376 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.528473 kubelet[2808]: E0115 00:41:55.528400 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
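The recurring driver-call failures above mean the kubelet's FlexVolume probe found a plugin directory named nodeagent~uds, but the driver binary it expects at the path in the message is missing or not executable, so the init call returns empty output and the JSON unmarshal fails. A small diagnostic sketch, assuming only that the path reported by the kubelet is checked from the host (the probe below is illustrative, not part of the kubelet):

    import json
    import os
    import subprocess

    # Driver path reported verbatim in the kubelet FlexVolume errors above.
    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    if not (os.path.isfile(DRIVER) and os.access(DRIVER, os.X_OK)):
        print(f"driver missing or not executable: {DRIVER}")
    else:
        # The kubelet invokes `<driver> init` and expects a JSON status object.
        result = subprocess.run([DRIVER, "init"], capture_output=True, text=True)
        try:
            print(json.loads(result.stdout))
        except json.JSONDecodeError:
            print(f"driver returned non-JSON output: {result.stdout!r}")
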
Error: unexpected end of JSON input" Jan 15 00:41:55.608237 kubelet[2808]: E0115 00:41:55.606472 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:55.608978 kubelet[2808]: E0115 00:41:55.608935 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:41:55.613490 containerd[1637]: time="2026-01-15T00:41:55.613022665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9mwn,Uid:be73e5eb-6296-4ba8-bd88-38c1e283ad81,Namespace:calico-system,Attempt:0,}" Jan 15 00:41:55.686968 kubelet[2808]: E0115 00:41:55.685553 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.686968 kubelet[2808]: W0115 00:41:55.685578 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.688875 systemd[1]: Started cri-containerd-cd8d86b62bc5b0ce9b2af21e47688c4134f4e6e46c899256fa1fedcde10d52ad.scope - libcontainer container cd8d86b62bc5b0ce9b2af21e47688c4134f4e6e46c899256fa1fedcde10d52ad. Jan 15 00:41:55.689330 kubelet[2808]: E0115 00:41:55.689147 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.690102 kubelet[2808]: E0115 00:41:55.690083 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.690189 kubelet[2808]: W0115 00:41:55.690170 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.690265 kubelet[2808]: E0115 00:41:55.690252 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.693544 kubelet[2808]: E0115 00:41:55.691585 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.693544 kubelet[2808]: W0115 00:41:55.693087 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.694261 kubelet[2808]: E0115 00:41:55.693994 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
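The "Nameserver limits exceeded" warnings above come from the kubelet building pod DNS configuration from the node's resolver file: it keeps at most three nameservers and drops the rest, which is why the applied line is trimmed to 1.1.1.1 1.0.0.1 8.8.8.8. A quick check of what the node advertises, assuming /etc/resolv.conf is the file the kubelet reads (the actual path depends on its --resolv-conf setting):

    # Count nameserver entries in the resolver file the kubelet reads.
    # The path below is an assumption; --resolv-conf may point elsewhere
    # (e.g. the systemd-resolved upstream file).
    RESOLV_CONF = "/etc/resolv.conf"

    with open(RESOLV_CONF) as f:
        nameservers = [line.split()[1] for line in f
                       if line.strip().startswith("nameserver")]

    print(nameservers)
    if len(nameservers) > 3:
        print("more than 3 nameservers; the kubelet truncates the list "
              "and logs the warning seen above")
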
Error: unexpected end of JSON input" Jan 15 00:41:55.696518 kubelet[2808]: E0115 00:41:55.696151 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.696518 kubelet[2808]: W0115 00:41:55.696168 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.696518 kubelet[2808]: E0115 00:41:55.696187 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.706025 kubelet[2808]: E0115 00:41:55.705432 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.706025 kubelet[2808]: W0115 00:41:55.705450 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.707037 kubelet[2808]: E0115 00:41:55.706893 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.707309 kubelet[2808]: E0115 00:41:55.707293 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.707393 kubelet[2808]: W0115 00:41:55.707379 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.707463 kubelet[2808]: E0115 00:41:55.707449 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.708709 kubelet[2808]: E0115 00:41:55.708296 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.708709 kubelet[2808]: W0115 00:41:55.708313 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.708709 kubelet[2808]: E0115 00:41:55.708326 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.712317 kubelet[2808]: E0115 00:41:55.712297 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.713092 kubelet[2808]: W0115 00:41:55.712401 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.713092 kubelet[2808]: E0115 00:41:55.712425 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.714392 kubelet[2808]: E0115 00:41:55.714374 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.714482 kubelet[2808]: W0115 00:41:55.714468 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.714562 kubelet[2808]: E0115 00:41:55.714545 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.716179 kubelet[2808]: E0115 00:41:55.716162 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.716263 kubelet[2808]: W0115 00:41:55.716247 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.716336 kubelet[2808]: E0115 00:41:55.716321 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.720165 kubelet[2808]: E0115 00:41:55.718235 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.720263 kubelet[2808]: W0115 00:41:55.720244 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.721033 kubelet[2808]: E0115 00:41:55.721013 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.721249 kubelet[2808]: I0115 00:41:55.721226 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5de5824a-09ed-431d-8ba6-dbc85139b40f-registration-dir\") pod \"csi-node-driver-q7lg4\" (UID: \"5de5824a-09ed-431d-8ba6-dbc85139b40f\") " pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:41:55.724289 kubelet[2808]: E0115 00:41:55.724270 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.724379 kubelet[2808]: W0115 00:41:55.724363 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.725572 kubelet[2808]: E0115 00:41:55.725085 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.729102 kubelet[2808]: I0115 00:41:55.728538 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5de5824a-09ed-431d-8ba6-dbc85139b40f-kubelet-dir\") pod \"csi-node-driver-q7lg4\" (UID: \"5de5824a-09ed-431d-8ba6-dbc85139b40f\") " pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:41:55.729102 kubelet[2808]: E0115 00:41:55.729026 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.729102 kubelet[2808]: W0115 00:41:55.729038 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.729479 kubelet[2808]: E0115 00:41:55.729258 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.735436 kubelet[2808]: E0115 00:41:55.731095 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.735436 kubelet[2808]: W0115 00:41:55.731111 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.737974 kubelet[2808]: E0115 00:41:55.737934 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.738466 kubelet[2808]: E0115 00:41:55.738446 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.741216 kubelet[2808]: W0115 00:41:55.741193 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.741392 kubelet[2808]: E0115 00:41:55.741375 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.746577 kubelet[2808]: E0115 00:41:55.744953 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.746577 kubelet[2808]: W0115 00:41:55.744966 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.746577 kubelet[2808]: E0115 00:41:55.745018 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.748883 kubelet[2808]: E0115 00:41:55.748867 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.748978 kubelet[2808]: W0115 00:41:55.748954 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.750031 kubelet[2808]: E0115 00:41:55.749931 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.753088 kubelet[2808]: E0115 00:41:55.753071 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.753180 kubelet[2808]: W0115 00:41:55.753166 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.753254 kubelet[2808]: E0115 00:41:55.753240 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.754538 kubelet[2808]: E0115 00:41:55.754158 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.754538 kubelet[2808]: W0115 00:41:55.754172 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.755449 kubelet[2808]: E0115 00:41:55.755355 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.760137 kubelet[2808]: E0115 00:41:55.758456 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.761010 kubelet[2808]: W0115 00:41:55.760581 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.761010 kubelet[2808]: E0115 00:41:55.760896 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.761233 kubelet[2808]: E0115 00:41:55.761220 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.761284 kubelet[2808]: W0115 00:41:55.761273 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.761329 kubelet[2808]: E0115 00:41:55.761318 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.766391 kubelet[2808]: E0115 00:41:55.764224 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.766391 kubelet[2808]: W0115 00:41:55.764239 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.766391 kubelet[2808]: E0115 00:41:55.764357 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.767038 kubelet[2808]: E0115 00:41:55.767023 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.767102 kubelet[2808]: W0115 00:41:55.767091 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.767281 kubelet[2808]: E0115 00:41:55.767266 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.769433 kubelet[2808]: E0115 00:41:55.769416 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.769510 kubelet[2808]: W0115 00:41:55.769497 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.769589 kubelet[2808]: E0115 00:41:55.769574 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.771342 kubelet[2808]: E0115 00:41:55.771328 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.771416 kubelet[2808]: W0115 00:41:55.771404 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.771462 kubelet[2808]: E0115 00:41:55.771452 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.773170 kubelet[2808]: E0115 00:41:55.773073 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.773170 kubelet[2808]: W0115 00:41:55.773087 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.773170 kubelet[2808]: E0115 00:41:55.773100 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.781072 containerd[1637]: time="2026-01-15T00:41:55.780514876Z" level=info msg="connecting to shim 82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32" address="unix:///run/containerd/s/f01d80b87fd34682a394dec687a2ab7b3228f3ae7648ab5e25ea943e06c47fa9" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:41:55.793000 audit: BPF prog-id=149 op=LOAD Jan 15 00:41:55.795000 audit: BPF prog-id=150 op=LOAD Jan 15 00:41:55.795000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d6238 a2=98 a3=0 items=0 ppid=3257 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364386438366236326263356230636539623261663231653437363838 Jan 15 00:41:55.796000 audit: BPF prog-id=150 op=UNLOAD Jan 15 00:41:55.796000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3257 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364386438366236326263356230636539623261663231653437363838 Jan 15 00:41:55.797000 audit: BPF prog-id=151 op=LOAD Jan 15 00:41:55.797000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d6488 a2=98 a3=0 items=0 ppid=3257 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364386438366236326263356230636539623261663231653437363838 Jan 15 00:41:55.798000 audit: BPF prog-id=152 op=LOAD Jan 15 00:41:55.798000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001d6218 a2=98 a3=0 items=0 ppid=3257 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364386438366236326263356230636539623261663231653437363838 Jan 15 00:41:55.798000 audit: BPF prog-id=152 op=UNLOAD Jan 15 00:41:55.798000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3257 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.798000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364386438366236326263356230636539623261663231653437363838 Jan 15 00:41:55.800000 audit: BPF prog-id=151 op=UNLOAD Jan 15 00:41:55.800000 audit[3283]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3257 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364386438366236326263356230636539623261663231653437363838 Jan 15 00:41:55.800000 audit: BPF prog-id=153 op=LOAD Jan 15 00:41:55.800000 audit[3283]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d66e8 a2=98 a3=0 items=0 ppid=3257 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:55.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364386438366236326263356230636539623261663231653437363838 Jan 15 00:41:55.832056 kubelet[2808]: E0115 00:41:55.832025 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.832271 kubelet[2808]: W0115 00:41:55.832248 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.832361 kubelet[2808]: E0115 00:41:55.832343 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.835164 kubelet[2808]: I0115 00:41:55.834881 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsc7f\" (UniqueName: \"kubernetes.io/projected/5de5824a-09ed-431d-8ba6-dbc85139b40f-kube-api-access-hsc7f\") pod \"csi-node-driver-q7lg4\" (UID: \"5de5824a-09ed-431d-8ba6-dbc85139b40f\") " pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:41:55.836064 kubelet[2808]: E0115 00:41:55.836042 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.836162 kubelet[2808]: W0115 00:41:55.836144 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.836240 kubelet[2808]: E0115 00:41:55.836225 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.840896 kubelet[2808]: E0115 00:41:55.839419 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.840991 kubelet[2808]: W0115 00:41:55.840971 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.841069 kubelet[2808]: E0115 00:41:55.841052 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.841952 kubelet[2808]: I0115 00:41:55.841700 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5de5824a-09ed-431d-8ba6-dbc85139b40f-varrun\") pod \"csi-node-driver-q7lg4\" (UID: \"5de5824a-09ed-431d-8ba6-dbc85139b40f\") " pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:41:55.846030 kubelet[2808]: E0115 00:41:55.845060 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.846030 kubelet[2808]: W0115 00:41:55.845166 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.846030 kubelet[2808]: E0115 00:41:55.845193 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.848474 kubelet[2808]: E0115 00:41:55.848403 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.848474 kubelet[2808]: W0115 00:41:55.848419 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.848922 kubelet[2808]: E0115 00:41:55.848578 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.850097 kubelet[2808]: E0115 00:41:55.849248 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.850097 kubelet[2808]: W0115 00:41:55.849354 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.852880 kubelet[2808]: E0115 00:41:55.852438 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.852880 kubelet[2808]: E0115 00:41:55.852550 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.852880 kubelet[2808]: W0115 00:41:55.852562 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.856078 kubelet[2808]: E0115 00:41:55.853523 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.860346 kubelet[2808]: E0115 00:41:55.860258 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.860346 kubelet[2808]: W0115 00:41:55.860274 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.860428 kubelet[2808]: E0115 00:41:55.860348 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.861891 kubelet[2808]: E0115 00:41:55.861298 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.862365 kubelet[2808]: W0115 00:41:55.862342 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.863235 kubelet[2808]: E0115 00:41:55.862952 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.867200 kubelet[2808]: E0115 00:41:55.866967 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.870016 kubelet[2808]: W0115 00:41:55.868448 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.870016 kubelet[2808]: E0115 00:41:55.869083 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.874201 kubelet[2808]: E0115 00:41:55.874183 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.875567 kubelet[2808]: W0115 00:41:55.875547 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.876518 kubelet[2808]: E0115 00:41:55.876165 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.876973 kubelet[2808]: E0115 00:41:55.876953 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.877954 kubelet[2808]: W0115 00:41:55.877928 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.878575 kubelet[2808]: E0115 00:41:55.878512 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.878988 kubelet[2808]: E0115 00:41:55.878956 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.878988 kubelet[2808]: W0115 00:41:55.878971 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.881002 kubelet[2808]: E0115 00:41:55.880101 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.881145 kubelet[2808]: I0115 00:41:55.881126 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5de5824a-09ed-431d-8ba6-dbc85139b40f-socket-dir\") pod \"csi-node-driver-q7lg4\" (UID: \"5de5824a-09ed-431d-8ba6-dbc85139b40f\") " pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:41:55.881321 kubelet[2808]: E0115 00:41:55.881308 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.881393 kubelet[2808]: W0115 00:41:55.881380 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.882353 kubelet[2808]: E0115 00:41:55.882282 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.882442 kubelet[2808]: E0115 00:41:55.882430 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.882554 kubelet[2808]: W0115 00:41:55.882500 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.882932 kubelet[2808]: E0115 00:41:55.882891 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.885396 kubelet[2808]: E0115 00:41:55.885086 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.885396 kubelet[2808]: W0115 00:41:55.885108 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.885396 kubelet[2808]: E0115 00:41:55.885169 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.885585 kubelet[2808]: E0115 00:41:55.885572 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.886298 kubelet[2808]: W0115 00:41:55.886281 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.887103 kubelet[2808]: E0115 00:41:55.887085 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.888061 kubelet[2808]: E0115 00:41:55.888043 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.888152 kubelet[2808]: W0115 00:41:55.888137 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.889060 kubelet[2808]: E0115 00:41:55.889041 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.889166 kubelet[2808]: E0115 00:41:55.889133 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.889252 kubelet[2808]: W0115 00:41:55.889235 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.889323 kubelet[2808]: E0115 00:41:55.889309 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.983159 kubelet[2808]: E0115 00:41:55.982923 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.983159 kubelet[2808]: W0115 00:41:55.982951 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.983159 kubelet[2808]: E0115 00:41:55.982975 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.987227 kubelet[2808]: E0115 00:41:55.987207 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.988347 kubelet[2808]: W0115 00:41:55.987336 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.988347 kubelet[2808]: E0115 00:41:55.987363 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.988517 kubelet[2808]: E0115 00:41:55.988501 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.988579 kubelet[2808]: W0115 00:41:55.988566 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.990512 kubelet[2808]: E0115 00:41:55.990305 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.992935 kubelet[2808]: E0115 00:41:55.992916 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.993083 kubelet[2808]: W0115 00:41:55.993065 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.995154 kubelet[2808]: E0115 00:41:55.993992 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.994254 systemd[1]: Started cri-containerd-82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32.scope - libcontainer container 82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32. Jan 15 00:41:55.996838 kubelet[2808]: E0115 00:41:55.996038 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.996838 kubelet[2808]: W0115 00:41:55.996052 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.997237 kubelet[2808]: E0115 00:41:55.997196 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:55.997365 kubelet[2808]: E0115 00:41:55.997351 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.997449 kubelet[2808]: W0115 00:41:55.997436 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.997937 kubelet[2808]: E0115 00:41:55.997844 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:55.999138 kubelet[2808]: E0115 00:41:55.999123 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:55.999228 kubelet[2808]: W0115 00:41:55.999210 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:55.999912 kubelet[2808]: E0115 00:41:55.999887 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.000204 kubelet[2808]: E0115 00:41:56.000175 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.000204 kubelet[2808]: W0115 00:41:56.000189 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.003264 kubelet[2808]: E0115 00:41:56.002115 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.003264 kubelet[2808]: E0115 00:41:56.002236 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.003264 kubelet[2808]: W0115 00:41:56.002244 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.003397 kubelet[2808]: E0115 00:41:56.003369 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.005213 kubelet[2808]: E0115 00:41:56.005198 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.006212 kubelet[2808]: W0115 00:41:56.005911 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.006212 kubelet[2808]: E0115 00:41:56.006047 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:56.009271 kubelet[2808]: E0115 00:41:56.008532 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.011554 kubelet[2808]: W0115 00:41:56.010582 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.012188 kubelet[2808]: E0115 00:41:56.011992 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.013872 kubelet[2808]: E0115 00:41:56.013315 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.013872 kubelet[2808]: W0115 00:41:56.013333 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.013872 kubelet[2808]: E0115 00:41:56.013461 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.015220 kubelet[2808]: E0115 00:41:56.014914 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.015220 kubelet[2808]: W0115 00:41:56.015018 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.016994 kubelet[2808]: E0115 00:41:56.016434 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.020285 kubelet[2808]: E0115 00:41:56.020111 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.020285 kubelet[2808]: W0115 00:41:56.020223 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.021157 kubelet[2808]: E0115 00:41:56.021026 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.021852 kubelet[2808]: E0115 00:41:56.021329 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.021852 kubelet[2808]: W0115 00:41:56.021438 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.021852 kubelet[2808]: E0115 00:41:56.021450 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:41:56.084022 kernel: kauditd_printk_skb: 48 callbacks suppressed Jan 15 00:41:56.084078 kernel: audit: type=1334 audit(1768437716.064:538): prog-id=154 op=LOAD Jan 15 00:41:56.064000 audit: BPF prog-id=154 op=LOAD Jan 15 00:41:56.084133 containerd[1637]: time="2026-01-15T00:41:56.071271723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f7468cdff-8n8n8,Uid:1166dcff-ef3a-4acc-ac62-384abf9db327,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd8d86b62bc5b0ce9b2af21e47688c4134f4e6e46c899256fa1fedcde10d52ad\"" Jan 15 00:41:56.084133 containerd[1637]: time="2026-01-15T00:41:56.079219501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 15 00:41:56.084217 kubelet[2808]: E0115 00:41:56.070117 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:41:56.084217 kubelet[2808]: W0115 00:41:56.070134 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:41:56.084217 kubelet[2808]: E0115 00:41:56.070151 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:41:56.084217 kubelet[2808]: E0115 00:41:56.075178 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:56.095000 audit: BPF prog-id=155 op=LOAD Jan 15 00:41:56.095000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194238 a2=98 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.146916 kernel: audit: type=1334 audit(1768437716.095:539): prog-id=155 op=LOAD Jan 15 00:41:56.147064 kernel: audit: type=1300 audit(1768437716.095:539): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194238 a2=98 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.182986 kernel: audit: type=1327 audit(1768437716.095:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.095000 audit: BPF prog-id=155 op=UNLOAD Jan 15 00:41:56.095000 audit[3365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.234897 kernel: audit: type=1334 
audit(1768437716.095:540): prog-id=155 op=UNLOAD Jan 15 00:41:56.234957 kernel: audit: type=1300 audit(1768437716.095:540): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.290250 kernel: audit: type=1327 audit(1768437716.095:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.290439 kernel: audit: type=1334 audit(1768437716.095:541): prog-id=156 op=LOAD Jan 15 00:41:56.095000 audit: BPF prog-id=156 op=LOAD Jan 15 00:41:56.095000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194488 a2=98 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.335021 kernel: audit: type=1300 audit(1768437716.095:541): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194488 a2=98 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.378101 kernel: audit: type=1327 audit(1768437716.095:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.095000 audit: BPF prog-id=157 op=LOAD Jan 15 00:41:56.095000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000194218 a2=98 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.095000 audit: BPF prog-id=157 op=UNLOAD Jan 15 00:41:56.095000 audit[3365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 00:41:56.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.095000 audit: BPF prog-id=156 op=UNLOAD Jan 15 00:41:56.095000 audit[3365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.095000 audit: BPF prog-id=158 op=LOAD Jan 15 00:41:56.095000 audit[3365]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001946e8 a2=98 a3=0 items=0 ppid=3345 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:41:56.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613831616465323931353638306238646633336532333339656539 Jan 15 00:41:56.406444 containerd[1637]: time="2026-01-15T00:41:56.406027095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9mwn,Uid:be73e5eb-6296-4ba8-bd88-38c1e283ad81,Namespace:calico-system,Attempt:0,} returns sandbox id \"82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32\"" Jan 15 00:41:56.411857 kubelet[2808]: E0115 00:41:56.408429 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:41:57.003331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66779843.mount: Deactivated successfully. 
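The repeated driver-call.go:262, driver-call.go:149 and plugins.go:695 records above show the kubelet probing a FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, finding no executable there, and then failing to unmarshal the resulting empty output as the JSON status a driver call is expected to return. Below is a minimal, illustrative sketch of how one might confirm that on the node itself; the driver path is taken verbatim from the log records, while the script, its function name and its messages are assumptions added for illustration and are not part of the captured log.

#!/usr/bin/env python3
# Illustrative check only (not part of the captured log): verify whether the
# FlexVolume driver binary that kubelet keeps probing actually exists and is
# executable. The path below is the one reported in the driver-call.go:149
# warnings above; everything else is a hypothetical helper.
import os
import stat

DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def check_flexvolume_driver(path: str) -> str:
    # Missing binary matches the "executable file not found in $PATH" warnings;
    # an empty driver response is what produces "unexpected end of JSON input".
    if not os.path.exists(path):
        return f"missing: {path}"
    mode = os.stat(path).st_mode
    if not (mode & stat.S_IXUSR):
        return f"present but not executable: {path}"
    return f"present and executable: {path}"

if __name__ == "__main__":
    print(check_flexvolume_driver(DRIVER))

If the binary is genuinely absent, the "unexpected end of JSON input" errors follow directly from the failed call: it returns empty output, which cannot be parsed as the driver's JSON response, so the kubelet skips creating a plugin from that directory on each probe.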
Jan 15 00:41:57.039158 kubelet[2808]: E0115 00:41:57.038705 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:41:59.039990 kubelet[2808]: E0115 00:41:59.039568 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:41:59.806355 containerd[1637]: time="2026-01-15T00:41:59.806084008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:59.808311 containerd[1637]: time="2026-01-15T00:41:59.808218775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 15 00:41:59.813372 containerd[1637]: time="2026-01-15T00:41:59.812936185Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:59.817958 containerd[1637]: time="2026-01-15T00:41:59.817501854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:41:59.819045 containerd[1637]: time="2026-01-15T00:41:59.818430684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.739175757s" Jan 15 00:41:59.819045 containerd[1637]: time="2026-01-15T00:41:59.818573660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 15 00:41:59.824470 containerd[1637]: time="2026-01-15T00:41:59.824226658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 15 00:41:59.874897 containerd[1637]: time="2026-01-15T00:41:59.874573499Z" level=info msg="CreateContainer within sandbox \"cd8d86b62bc5b0ce9b2af21e47688c4134f4e6e46c899256fa1fedcde10d52ad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 00:41:59.909925 containerd[1637]: time="2026-01-15T00:41:59.909045204Z" level=info msg="Container 6e8371df5e73fd726ad80f0e78ad4e4f7442dafde80f0014ffe91aed4320bbd3: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:41:59.935473 containerd[1637]: time="2026-01-15T00:41:59.935240347Z" level=info msg="CreateContainer within sandbox \"cd8d86b62bc5b0ce9b2af21e47688c4134f4e6e46c899256fa1fedcde10d52ad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6e8371df5e73fd726ad80f0e78ad4e4f7442dafde80f0014ffe91aed4320bbd3\"" Jan 15 00:41:59.938311 containerd[1637]: time="2026-01-15T00:41:59.938085399Z" level=info msg="StartContainer for 
\"6e8371df5e73fd726ad80f0e78ad4e4f7442dafde80f0014ffe91aed4320bbd3\"" Jan 15 00:41:59.948294 containerd[1637]: time="2026-01-15T00:41:59.943708407Z" level=info msg="connecting to shim 6e8371df5e73fd726ad80f0e78ad4e4f7442dafde80f0014ffe91aed4320bbd3" address="unix:///run/containerd/s/f9b126773f3e2f67efbf114d39ab70e05ee13e4fa15027c488fc61e8183fb228" protocol=ttrpc version=3 Jan 15 00:42:00.067219 systemd[1]: Started cri-containerd-6e8371df5e73fd726ad80f0e78ad4e4f7442dafde80f0014ffe91aed4320bbd3.scope - libcontainer container 6e8371df5e73fd726ad80f0e78ad4e4f7442dafde80f0014ffe91aed4320bbd3. Jan 15 00:42:00.131000 audit: BPF prog-id=159 op=LOAD Jan 15 00:42:00.134000 audit: BPF prog-id=160 op=LOAD Jan 15 00:42:00.134000 audit[3434]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=3257 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:00.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665383337316466356537336664373236616438306630653738616434 Jan 15 00:42:00.134000 audit: BPF prog-id=160 op=UNLOAD Jan 15 00:42:00.134000 audit[3434]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3257 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:00.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665383337316466356537336664373236616438306630653738616434 Jan 15 00:42:00.134000 audit: BPF prog-id=161 op=LOAD Jan 15 00:42:00.134000 audit[3434]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=3257 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:00.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665383337316466356537336664373236616438306630653738616434 Jan 15 00:42:00.135000 audit: BPF prog-id=162 op=LOAD Jan 15 00:42:00.135000 audit[3434]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000190218 a2=98 a3=0 items=0 ppid=3257 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:00.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665383337316466356537336664373236616438306630653738616434 Jan 15 00:42:00.135000 audit: BPF prog-id=162 op=UNLOAD Jan 15 00:42:00.135000 audit[3434]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=3257 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:00.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665383337316466356537336664373236616438306630653738616434 Jan 15 00:42:00.136000 audit: BPF prog-id=161 op=UNLOAD Jan 15 00:42:00.136000 audit[3434]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3257 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:00.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665383337316466356537336664373236616438306630653738616434 Jan 15 00:42:00.136000 audit: BPF prog-id=163 op=LOAD Jan 15 00:42:00.136000 audit[3434]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001906e8 a2=98 a3=0 items=0 ppid=3257 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:00.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665383337316466356537336664373236616438306630653738616434 Jan 15 00:42:00.317152 containerd[1637]: time="2026-01-15T00:42:00.316575452Z" level=info msg="StartContainer for \"6e8371df5e73fd726ad80f0e78ad4e4f7442dafde80f0014ffe91aed4320bbd3\" returns successfully" Jan 15 00:42:00.433033 kubelet[2808]: E0115 00:42:00.432302 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:00.437980 kubelet[2808]: E0115 00:42:00.437542 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.437980 kubelet[2808]: W0115 00:42:00.437947 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.437980 kubelet[2808]: E0115 00:42:00.437966 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:00.441978 kubelet[2808]: E0115 00:42:00.441579 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.441978 kubelet[2808]: W0115 00:42:00.441955 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.441978 kubelet[2808]: E0115 00:42:00.441975 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.447184 kubelet[2808]: E0115 00:42:00.445968 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.447184 kubelet[2808]: W0115 00:42:00.446087 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.447184 kubelet[2808]: E0115 00:42:00.446107 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.451220 kubelet[2808]: E0115 00:42:00.451097 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.451220 kubelet[2808]: W0115 00:42:00.451203 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.451220 kubelet[2808]: E0115 00:42:00.451220 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.453216 kubelet[2808]: E0115 00:42:00.452395 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.453216 kubelet[2808]: W0115 00:42:00.452504 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.453216 kubelet[2808]: E0115 00:42:00.452519 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.456348 kubelet[2808]: E0115 00:42:00.456110 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.456348 kubelet[2808]: W0115 00:42:00.456215 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.456348 kubelet[2808]: E0115 00:42:00.456230 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:00.457239 kubelet[2808]: E0115 00:42:00.457203 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.457331 kubelet[2808]: W0115 00:42:00.457309 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.457415 kubelet[2808]: E0115 00:42:00.457398 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.459335 kubelet[2808]: E0115 00:42:00.459319 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.459524 kubelet[2808]: W0115 00:42:00.459392 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.459524 kubelet[2808]: E0115 00:42:00.459425 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.460421 kubelet[2808]: E0115 00:42:00.460348 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.460421 kubelet[2808]: W0115 00:42:00.460362 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.460421 kubelet[2808]: E0115 00:42:00.460375 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.461205 kubelet[2808]: E0115 00:42:00.461192 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.461264 kubelet[2808]: W0115 00:42:00.461253 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.461309 kubelet[2808]: E0115 00:42:00.461299 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.462332 kubelet[2808]: E0115 00:42:00.462318 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.462393 kubelet[2808]: W0115 00:42:00.462382 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.462451 kubelet[2808]: E0115 00:42:00.462439 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:00.464459 kubelet[2808]: E0115 00:42:00.463277 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.464459 kubelet[2808]: W0115 00:42:00.464423 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.464459 kubelet[2808]: E0115 00:42:00.464440 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.467081 kubelet[2808]: E0115 00:42:00.466570 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.467081 kubelet[2808]: W0115 00:42:00.466587 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.467210 kubelet[2808]: E0115 00:42:00.467196 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.468341 kubelet[2808]: E0115 00:42:00.468326 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.470337 kubelet[2808]: W0115 00:42:00.470313 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.470434 kubelet[2808]: E0115 00:42:00.470418 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.471589 kubelet[2808]: E0115 00:42:00.471573 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.473194 kubelet[2808]: W0115 00:42:00.472006 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.473194 kubelet[2808]: E0115 00:42:00.472025 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.475907 kubelet[2808]: E0115 00:42:00.475425 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.475907 kubelet[2808]: W0115 00:42:00.475443 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.475907 kubelet[2808]: E0115 00:42:00.475459 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:00.477911 kubelet[2808]: E0115 00:42:00.477344 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.477911 kubelet[2808]: W0115 00:42:00.477462 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.478421 kubelet[2808]: E0115 00:42:00.478217 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.479906 kubelet[2808]: E0115 00:42:00.479307 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.479906 kubelet[2808]: W0115 00:42:00.479427 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.479906 kubelet[2808]: E0115 00:42:00.479536 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.480900 kubelet[2808]: E0115 00:42:00.480482 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.480900 kubelet[2808]: W0115 00:42:00.480582 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.481411 kubelet[2808]: E0115 00:42:00.481110 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.482323 kubelet[2808]: E0115 00:42:00.482207 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.482323 kubelet[2808]: W0115 00:42:00.482305 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.482932 kubelet[2808]: E0115 00:42:00.482908 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.484119 kubelet[2808]: E0115 00:42:00.483892 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.484119 kubelet[2808]: W0115 00:42:00.483905 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.484119 kubelet[2808]: E0115 00:42:00.483992 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:00.484423 kubelet[2808]: E0115 00:42:00.484318 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.484423 kubelet[2808]: W0115 00:42:00.484419 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.488959 kubelet[2808]: E0115 00:42:00.484569 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.489351 kubelet[2808]: E0115 00:42:00.489233 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.489351 kubelet[2808]: W0115 00:42:00.489347 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.490130 kubelet[2808]: E0115 00:42:00.490025 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.490525 kubelet[2808]: E0115 00:42:00.490399 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.490525 kubelet[2808]: W0115 00:42:00.490505 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.491073 kubelet[2808]: E0115 00:42:00.490968 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.492344 kubelet[2808]: E0115 00:42:00.492211 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.492344 kubelet[2808]: W0115 00:42:00.492333 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.495271 kubelet[2808]: E0115 00:42:00.493048 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.495979 kubelet[2808]: E0115 00:42:00.495506 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.497258 kubelet[2808]: W0115 00:42:00.497135 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.497591 kubelet[2808]: E0115 00:42:00.497472 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:00.499292 kubelet[2808]: E0115 00:42:00.498981 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.499292 kubelet[2808]: W0115 00:42:00.498996 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.503161 kubelet[2808]: E0115 00:42:00.502912 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.504492 kubelet[2808]: E0115 00:42:00.503593 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.504492 kubelet[2808]: W0115 00:42:00.504372 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.507487 kubelet[2808]: E0115 00:42:00.506053 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.509189 kubelet[2808]: E0115 00:42:00.509107 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.509189 kubelet[2808]: W0115 00:42:00.509119 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.510012 kubelet[2808]: E0115 00:42:00.509319 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.510012 kubelet[2808]: E0115 00:42:00.509394 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.510012 kubelet[2808]: W0115 00:42:00.509402 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.510144 kubelet[2808]: E0115 00:42:00.510061 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.513441 kubelet[2808]: E0115 00:42:00.512888 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.513441 kubelet[2808]: W0115 00:42:00.512902 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.514200 kubelet[2808]: E0115 00:42:00.513958 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:00.515283 kubelet[2808]: E0115 00:42:00.515158 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.515283 kubelet[2808]: W0115 00:42:00.515170 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.515283 kubelet[2808]: E0115 00:42:00.515181 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:00.516461 kubelet[2808]: E0115 00:42:00.516399 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:00.516461 kubelet[2808]: W0115 00:42:00.516412 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:00.516461 kubelet[2808]: E0115 00:42:00.516422 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.041288 kubelet[2808]: E0115 00:42:01.041145 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:01.443333 kubelet[2808]: I0115 00:42:01.443216 2808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 00:42:01.446883 kubelet[2808]: E0115 00:42:01.444706 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:01.496294 kubelet[2808]: E0115 00:42:01.495533 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.496294 kubelet[2808]: W0115 00:42:01.495559 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.496294 kubelet[2808]: E0115 00:42:01.496062 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.498000 kubelet[2808]: E0115 00:42:01.497212 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.498000 kubelet[2808]: W0115 00:42:01.497229 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.498000 kubelet[2808]: E0115 00:42:01.497248 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:01.502115 kubelet[2808]: E0115 00:42:01.501952 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.502115 kubelet[2808]: W0115 00:42:01.502073 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.502115 kubelet[2808]: E0115 00:42:01.502092 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.503345 kubelet[2808]: E0115 00:42:01.503137 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.503345 kubelet[2808]: W0115 00:42:01.503248 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.503345 kubelet[2808]: E0115 00:42:01.503263 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.505368 kubelet[2808]: E0115 00:42:01.504599 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.505368 kubelet[2808]: W0115 00:42:01.504978 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.505368 kubelet[2808]: E0115 00:42:01.504990 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.507158 kubelet[2808]: E0115 00:42:01.506916 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.507158 kubelet[2808]: W0115 00:42:01.507030 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.507158 kubelet[2808]: E0115 00:42:01.507044 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.511028 kubelet[2808]: E0115 00:42:01.510929 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.511089 kubelet[2808]: W0115 00:42:01.511032 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.511089 kubelet[2808]: E0115 00:42:01.511046 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:01.512156 kubelet[2808]: E0115 00:42:01.512111 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.512156 kubelet[2808]: W0115 00:42:01.512123 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.512156 kubelet[2808]: E0115 00:42:01.512136 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.512551 kubelet[2808]: E0115 00:42:01.512428 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.512551 kubelet[2808]: W0115 00:42:01.512438 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.512551 kubelet[2808]: E0115 00:42:01.512449 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.514225 kubelet[2808]: E0115 00:42:01.513481 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.525217 kubelet[2808]: W0115 00:42:01.517375 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.525516 kubelet[2808]: E0115 00:42:01.525341 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.527188 kubelet[2808]: E0115 00:42:01.527072 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.527254 kubelet[2808]: W0115 00:42:01.527189 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.527254 kubelet[2808]: E0115 00:42:01.527205 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.528398 kubelet[2808]: E0115 00:42:01.528073 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.528398 kubelet[2808]: W0115 00:42:01.528088 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.528398 kubelet[2808]: E0115 00:42:01.528101 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:01.534505 kubelet[2808]: E0115 00:42:01.534247 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.535117 kubelet[2808]: W0115 00:42:01.534505 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.535117 kubelet[2808]: E0115 00:42:01.535009 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.536965 kubelet[2808]: E0115 00:42:01.535336 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.536965 kubelet[2808]: W0115 00:42:01.535352 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.536965 kubelet[2808]: E0115 00:42:01.535364 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.536965 kubelet[2808]: E0115 00:42:01.535977 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.536965 kubelet[2808]: W0115 00:42:01.535989 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.536965 kubelet[2808]: E0115 00:42:01.536001 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.538409 kubelet[2808]: E0115 00:42:01.538296 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.538409 kubelet[2808]: W0115 00:42:01.538408 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.538486 kubelet[2808]: E0115 00:42:01.538422 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.539505 kubelet[2808]: E0115 00:42:01.539382 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.539505 kubelet[2808]: W0115 00:42:01.539499 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.540938 kubelet[2808]: E0115 00:42:01.539956 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:01.542976 kubelet[2808]: E0115 00:42:01.541902 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.542976 kubelet[2808]: W0115 00:42:01.542062 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.542976 kubelet[2808]: E0115 00:42:01.542579 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.546022 kubelet[2808]: E0115 00:42:01.545400 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.546090 kubelet[2808]: W0115 00:42:01.546034 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.548706 kubelet[2808]: E0115 00:42:01.548316 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.554149 kubelet[2808]: E0115 00:42:01.552250 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.554149 kubelet[2808]: W0115 00:42:01.552272 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.554149 kubelet[2808]: E0115 00:42:01.552290 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.559146 kubelet[2808]: E0115 00:42:01.557983 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.559146 kubelet[2808]: W0115 00:42:01.558060 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.559146 kubelet[2808]: E0115 00:42:01.558417 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:01.561529 kubelet[2808]: E0115 00:42:01.561408 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.561529 kubelet[2808]: W0115 00:42:01.561422 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.566411 kubelet[2808]: E0115 00:42:01.566388 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.566552 kubelet[2808]: W0115 00:42:01.566497 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.567286 kubelet[2808]: E0115 00:42:01.567266 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.569117 kubelet[2808]: E0115 00:42:01.569101 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.569218 kubelet[2808]: W0115 00:42:01.569198 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.571930 kubelet[2808]: E0115 00:42:01.570170 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.572144 kubelet[2808]: E0115 00:42:01.570283 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.572237 kubelet[2808]: W0115 00:42:01.572219 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.572316 kubelet[2808]: E0115 00:42:01.572296 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.574166 kubelet[2808]: E0115 00:42:01.563526 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.580154 kubelet[2808]: E0115 00:42:01.579449 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.581097 kubelet[2808]: W0115 00:42:01.580233 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.581332 kubelet[2808]: E0115 00:42:01.581311 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:01.584463 kubelet[2808]: E0115 00:42:01.584444 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.585214 kubelet[2808]: W0115 00:42:01.584968 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.585963 kubelet[2808]: E0115 00:42:01.585408 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.586037 kubelet[2808]: E0115 00:42:01.586012 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.586037 kubelet[2808]: W0115 00:42:01.586025 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.588091 kubelet[2808]: E0115 00:42:01.586968 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.588142 kubelet[2808]: E0115 00:42:01.588095 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.588142 kubelet[2808]: W0115 00:42:01.588107 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.588482 kubelet[2808]: E0115 00:42:01.588283 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.590066 kubelet[2808]: E0115 00:42:01.590049 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.590436 kubelet[2808]: W0115 00:42:01.590157 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.591016 kubelet[2808]: E0115 00:42:01.590970 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.592225 kubelet[2808]: E0115 00:42:01.591217 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.592225 kubelet[2808]: W0115 00:42:01.591346 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.592225 kubelet[2808]: E0115 00:42:01.591552 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 00:42:01.593693 kubelet[2808]: E0115 00:42:01.593193 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.593693 kubelet[2808]: W0115 00:42:01.593208 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.593693 kubelet[2808]: E0115 00:42:01.593226 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.593693 kubelet[2808]: E0115 00:42:01.593513 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 00:42:01.593693 kubelet[2808]: W0115 00:42:01.593525 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 00:42:01.593693 kubelet[2808]: E0115 00:42:01.593537 2808 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 00:42:01.646971 containerd[1637]: time="2026-01-15T00:42:01.646554167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:01.650926 containerd[1637]: time="2026-01-15T00:42:01.650312085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 15 00:42:01.653878 containerd[1637]: time="2026-01-15T00:42:01.653483769Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:01.658885 containerd[1637]: time="2026-01-15T00:42:01.658299617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:01.661891 containerd[1637]: time="2026-01-15T00:42:01.659590646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.835217805s" Jan 15 00:42:01.661891 containerd[1637]: time="2026-01-15T00:42:01.661533766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 15 00:42:01.669156 containerd[1637]: time="2026-01-15T00:42:01.669126301Z" level=info msg="CreateContainer within sandbox \"82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 00:42:01.752027 containerd[1637]: time="2026-01-15T00:42:01.751297476Z" level=info msg="Container d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87: 
CDI devices from CRI Config.CDIDevices: []" Jan 15 00:42:01.757156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249467818.mount: Deactivated successfully. Jan 15 00:42:01.783004 containerd[1637]: time="2026-01-15T00:42:01.782575626Z" level=info msg="CreateContainer within sandbox \"82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87\"" Jan 15 00:42:01.793853 containerd[1637]: time="2026-01-15T00:42:01.789455584Z" level=info msg="StartContainer for \"d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87\"" Jan 15 00:42:01.797481 containerd[1637]: time="2026-01-15T00:42:01.797449385Z" level=info msg="connecting to shim d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87" address="unix:///run/containerd/s/f01d80b87fd34682a394dec687a2ab7b3228f3ae7648ab5e25ea943e06c47fa9" protocol=ttrpc version=3 Jan 15 00:42:01.868415 systemd[1]: Started cri-containerd-d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87.scope - libcontainer container d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87. Jan 15 00:42:02.009000 audit: BPF prog-id=164 op=LOAD Jan 15 00:42:02.019590 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 15 00:42:02.019967 kernel: audit: type=1334 audit(1768437722.009:554): prog-id=164 op=LOAD Jan 15 00:42:02.009000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.069298 kernel: audit: type=1300 audit(1768437722.009:554): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.069947 kernel: audit: type=1327 audit(1768437722.009:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 15 00:42:02.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 15 00:42:02.010000 audit: BPF prog-id=165 op=LOAD Jan 15 00:42:02.121403 kernel: audit: type=1334 audit(1768437722.010:555): prog-id=165 op=LOAD Jan 15 00:42:02.010000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 
15 00:42:02.200405 kernel: audit: type=1300 audit(1768437722.010:555): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.200500 kernel: audit: type=1327 audit(1768437722.010:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 15 00:42:02.201219 kernel: audit: type=1334 audit(1768437722.010:556): prog-id=165 op=UNLOAD Jan 15 00:42:02.010000 audit: BPF prog-id=165 op=UNLOAD Jan 15 00:42:02.010000 audit[3547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.249176 containerd[1637]: time="2026-01-15T00:42:02.249005445Z" level=info msg="StartContainer for \"d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87\" returns successfully" Jan 15 00:42:02.257891 kernel: audit: type=1300 audit(1768437722.010:556): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.257991 kernel: audit: type=1327 audit(1768437722.010:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 15 00:42:02.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 15 00:42:02.297539 kernel: audit: type=1334 audit(1768437722.010:557): prog-id=164 op=UNLOAD Jan 15 00:42:02.010000 audit: BPF prog-id=164 op=UNLOAD Jan 15 00:42:02.010000 audit[3547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 15 00:42:02.010000 audit: BPF prog-id=166 op=LOAD Jan 15 00:42:02.010000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3345 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:02.010000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430356363346137373332656664303130646334323333333533613533 Jan 15 00:42:02.332918 systemd[1]: cri-containerd-d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87.scope: Deactivated successfully. Jan 15 00:42:02.339000 audit: BPF prog-id=166 op=UNLOAD Jan 15 00:42:02.361446 containerd[1637]: time="2026-01-15T00:42:02.361202242Z" level=info msg="received container exit event container_id:\"d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87\" id:\"d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87\" pid:3559 exited_at:{seconds:1768437722 nanos:356298865}" Jan 15 00:42:02.461521 kubelet[2808]: E0115 00:42:02.461001 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:02.510265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d05cc4a7732efd010dc4233353a53d82fe45ca0c9f623e8234ef9905a67dce87-rootfs.mount: Deactivated successfully. Jan 15 00:42:02.533903 kubelet[2808]: I0115 00:42:02.533359 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f7468cdff-8n8n8" podStartSLOduration=4.788311535 podStartE2EDuration="8.533338596s" podCreationTimestamp="2026-01-15 00:41:54 +0000 UTC" firstStartedPulling="2026-01-15 00:41:56.078324294 +0000 UTC m=+28.409359698" lastFinishedPulling="2026-01-15 00:41:59.823351354 +0000 UTC m=+32.154386759" observedRunningTime="2026-01-15 00:42:00.500420324 +0000 UTC m=+32.831455888" watchObservedRunningTime="2026-01-15 00:42:02.533338596 +0000 UTC m=+34.864374000" Jan 15 00:42:03.040287 kubelet[2808]: E0115 00:42:03.040060 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:03.476114 kubelet[2808]: E0115 00:42:03.476004 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:03.480235 containerd[1637]: time="2026-01-15T00:42:03.479556436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 00:42:05.042703 kubelet[2808]: E0115 00:42:05.042235 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:07.040005 kubelet[2808]: E0115 00:42:07.039348 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:09.040259 kubelet[2808]: E0115 00:42:09.040116 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:09.379468 containerd[1637]: time="2026-01-15T00:42:09.378275331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:09.382444 containerd[1637]: time="2026-01-15T00:42:09.382272531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 15 00:42:09.386284 containerd[1637]: time="2026-01-15T00:42:09.386014601Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:09.398995 containerd[1637]: time="2026-01-15T00:42:09.398691566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:09.400447 containerd[1637]: time="2026-01-15T00:42:09.399443357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.919724568s" Jan 15 00:42:09.400447 containerd[1637]: time="2026-01-15T00:42:09.399707038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 15 00:42:09.410542 containerd[1637]: time="2026-01-15T00:42:09.410506221Z" level=info msg="CreateContainer within sandbox \"82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 00:42:09.441889 containerd[1637]: time="2026-01-15T00:42:09.441056322Z" level=info msg="Container 31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:42:09.467930 containerd[1637]: time="2026-01-15T00:42:09.467680057Z" level=info msg="CreateContainer within sandbox \"82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06\"" Jan 15 00:42:09.471939 containerd[1637]: time="2026-01-15T00:42:09.471440203Z" level=info msg="StartContainer for \"31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06\"" Jan 15 00:42:09.476392 containerd[1637]: time="2026-01-15T00:42:09.475477154Z" level=info msg="connecting to shim 31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06" address="unix:///run/containerd/s/f01d80b87fd34682a394dec687a2ab7b3228f3ae7648ab5e25ea943e06c47fa9" protocol=ttrpc version=3 Jan 15 00:42:09.569068 systemd[1]: Started cri-containerd-31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06.scope - libcontainer container 31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06. 
Jan 15 00:42:09.705000 audit: BPF prog-id=167 op=LOAD Jan 15 00:42:09.715272 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 15 00:42:09.715369 kernel: audit: type=1334 audit(1768437729.705:560): prog-id=167 op=LOAD Jan 15 00:42:09.705000 audit[3609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:09.766216 kernel: audit: type=1300 audit(1768437729.705:560): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:09.766344 kernel: audit: type=1327 audit(1768437729.705:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:09.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:09.810181 kernel: audit: type=1334 audit(1768437729.706:561): prog-id=168 op=LOAD Jan 15 00:42:09.706000 audit: BPF prog-id=168 op=LOAD Jan 15 00:42:09.822068 kernel: audit: type=1300 audit(1768437729.706:561): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:09.706000 audit[3609]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:09.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:09.919113 kernel: audit: type=1327 audit(1768437729.706:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:09.706000 audit: BPF prog-id=168 op=UNLOAD Jan 15 00:42:09.931134 kernel: audit: type=1334 audit(1768437729.706:562): prog-id=168 op=UNLOAD Jan 15 00:42:09.931225 kernel: audit: type=1300 audit(1768437729.706:562): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:09.706000 audit[3609]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:09.973161 containerd[1637]: time="2026-01-15T00:42:09.972502570Z" level=info msg="StartContainer for \"31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06\" returns successfully" Jan 15 00:42:09.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:10.026011 kernel: audit: type=1327 audit(1768437729.706:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:09.706000 audit: BPF prog-id=167 op=UNLOAD Jan 15 00:42:09.706000 audit[3609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:10.039234 kernel: audit: type=1334 audit(1768437729.706:563): prog-id=167 op=UNLOAD Jan 15 00:42:09.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:09.706000 audit: BPF prog-id=169 op=LOAD Jan 15 00:42:09.706000 audit[3609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3345 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:09.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331623334666230363232326261393132323238613933336563393961 Jan 15 00:42:10.595385 kubelet[2808]: E0115 00:42:10.593441 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:11.040107 kubelet[2808]: E0115 00:42:11.039131 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:11.599393 kubelet[2808]: E0115 00:42:11.599135 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:12.233704 systemd[1]: 
cri-containerd-31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06.scope: Deactivated successfully. Jan 15 00:42:12.234455 systemd[1]: cri-containerd-31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06.scope: Consumed 2.578s CPU time, 181.4M memory peak, 3.6M read from disk, 171.3M written to disk. Jan 15 00:42:12.239000 audit: BPF prog-id=169 op=UNLOAD Jan 15 00:42:12.270333 containerd[1637]: time="2026-01-15T00:42:12.270085767Z" level=info msg="received container exit event container_id:\"31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06\" id:\"31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06\" pid:3622 exited_at:{seconds:1768437732 nanos:241007274}" Jan 15 00:42:12.385145 kubelet[2808]: I0115 00:42:12.384684 2808 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 00:42:12.468164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31b34fb06222ba912228a933ec99abdc59a4c0dad91e38acc0bebbe9f44bdd06-rootfs.mount: Deactivated successfully. Jan 15 00:42:12.579693 systemd[1]: Created slice kubepods-besteffort-pod4a17e4fb_67a0_4d0e_b72d_590a1df87758.slice - libcontainer container kubepods-besteffort-pod4a17e4fb_67a0_4d0e_b72d_590a1df87758.slice. Jan 15 00:42:12.615109 kubelet[2808]: I0115 00:42:12.614456 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1429bbd4-fe5b-4951-85dc-5a892fa35b68-goldmane-ca-bundle\") pod \"goldmane-666569f655-9ztx8\" (UID: \"1429bbd4-fe5b-4951-85dc-5a892fa35b68\") " pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:12.619915 kubelet[2808]: I0115 00:42:12.619656 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znh8k\" (UniqueName: \"kubernetes.io/projected/2f1337fc-8e0d-4906-b1f0-90d0896b3f07-kube-api-access-znh8k\") pod \"calico-apiserver-5c9bd68d8f-tstqw\" (UID: \"2f1337fc-8e0d-4906-b1f0-90d0896b3f07\") " pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" Jan 15 00:42:12.619915 kubelet[2808]: I0115 00:42:12.619701 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a17e4fb-67a0-4d0e-b72d-590a1df87758-tigera-ca-bundle\") pod \"calico-kube-controllers-c6d667669-rqqnw\" (UID: \"4a17e4fb-67a0-4d0e-b72d-590a1df87758\") " pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" Jan 15 00:42:12.622842 kubelet[2808]: I0115 00:42:12.622143 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzpc\" (UniqueName: \"kubernetes.io/projected/1429bbd4-fe5b-4951-85dc-5a892fa35b68-kube-api-access-4rzpc\") pod \"goldmane-666569f655-9ztx8\" (UID: \"1429bbd4-fe5b-4951-85dc-5a892fa35b68\") " pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:12.622842 kubelet[2808]: I0115 00:42:12.622184 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krx4z\" (UniqueName: \"kubernetes.io/projected/4a17e4fb-67a0-4d0e-b72d-590a1df87758-kube-api-access-krx4z\") pod \"calico-kube-controllers-c6d667669-rqqnw\" (UID: \"4a17e4fb-67a0-4d0e-b72d-590a1df87758\") " pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" Jan 15 00:42:12.622842 kubelet[2808]: I0115 00:42:12.622212 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1429bbd4-fe5b-4951-85dc-5a892fa35b68-config\") pod \"goldmane-666569f655-9ztx8\" (UID: \"1429bbd4-fe5b-4951-85dc-5a892fa35b68\") " pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:12.622842 kubelet[2808]: I0115 00:42:12.622240 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-backend-key-pair\") pod \"whisker-6564dcd6c8-7fkjs\" (UID: \"89945528-1586-4a50-85f4-e1cc83f6a348\") " pod="calico-system/whisker-6564dcd6c8-7fkjs" Jan 15 00:42:12.622842 kubelet[2808]: I0115 00:42:12.622263 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgtn\" (UniqueName: \"kubernetes.io/projected/7d55d038-4c75-4fb7-a8e2-55a6787772a7-kube-api-access-jmgtn\") pod \"coredns-668d6bf9bc-j5chh\" (UID: \"7d55d038-4c75-4fb7-a8e2-55a6787772a7\") " pod="kube-system/coredns-668d6bf9bc-j5chh" Jan 15 00:42:12.623070 kubelet[2808]: I0115 00:42:12.622289 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f1337fc-8e0d-4906-b1f0-90d0896b3f07-calico-apiserver-certs\") pod \"calico-apiserver-5c9bd68d8f-tstqw\" (UID: \"2f1337fc-8e0d-4906-b1f0-90d0896b3f07\") " pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" Jan 15 00:42:12.623070 kubelet[2808]: I0115 00:42:12.622312 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88-calico-apiserver-certs\") pod \"calico-apiserver-5c9bd68d8f-cmczc\" (UID: \"e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88\") " pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" Jan 15 00:42:12.623070 kubelet[2808]: I0115 00:42:12.622336 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkmj\" (UniqueName: \"kubernetes.io/projected/89945528-1586-4a50-85f4-e1cc83f6a348-kube-api-access-cfkmj\") pod \"whisker-6564dcd6c8-7fkjs\" (UID: \"89945528-1586-4a50-85f4-e1cc83f6a348\") " pod="calico-system/whisker-6564dcd6c8-7fkjs" Jan 15 00:42:12.623070 kubelet[2808]: I0115 00:42:12.622357 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d55d038-4c75-4fb7-a8e2-55a6787772a7-config-volume\") pod \"coredns-668d6bf9bc-j5chh\" (UID: \"7d55d038-4c75-4fb7-a8e2-55a6787772a7\") " pod="kube-system/coredns-668d6bf9bc-j5chh" Jan 15 00:42:12.623070 kubelet[2808]: I0115 00:42:12.622379 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld7f4\" (UniqueName: \"kubernetes.io/projected/8a228c66-9272-4272-aa48-cfa8211b2a62-kube-api-access-ld7f4\") pod \"coredns-668d6bf9bc-mts8m\" (UID: \"8a228c66-9272-4272-aa48-cfa8211b2a62\") " pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:12.623248 kubelet[2808]: I0115 00:42:12.622405 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb75m\" (UniqueName: \"kubernetes.io/projected/e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88-kube-api-access-bb75m\") pod \"calico-apiserver-5c9bd68d8f-cmczc\" (UID: \"e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88\") " 
pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" Jan 15 00:42:12.623248 kubelet[2808]: I0115 00:42:12.622426 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a228c66-9272-4272-aa48-cfa8211b2a62-config-volume\") pod \"coredns-668d6bf9bc-mts8m\" (UID: \"8a228c66-9272-4272-aa48-cfa8211b2a62\") " pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:12.623248 kubelet[2808]: I0115 00:42:12.622446 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-ca-bundle\") pod \"whisker-6564dcd6c8-7fkjs\" (UID: \"89945528-1586-4a50-85f4-e1cc83f6a348\") " pod="calico-system/whisker-6564dcd6c8-7fkjs" Jan 15 00:42:12.623248 kubelet[2808]: I0115 00:42:12.622479 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1429bbd4-fe5b-4951-85dc-5a892fa35b68-goldmane-key-pair\") pod \"goldmane-666569f655-9ztx8\" (UID: \"1429bbd4-fe5b-4951-85dc-5a892fa35b68\") " pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:12.625469 systemd[1]: Created slice kubepods-besteffort-pod1429bbd4_fe5b_4951_85dc_5a892fa35b68.slice - libcontainer container kubepods-besteffort-pod1429bbd4_fe5b_4951_85dc_5a892fa35b68.slice. Jan 15 00:42:12.643968 systemd[1]: Created slice kubepods-besteffort-pode04d5d65_cdba_4eaf_b279_5b0e7c7a9f88.slice - libcontainer container kubepods-besteffort-pode04d5d65_cdba_4eaf_b279_5b0e7c7a9f88.slice. Jan 15 00:42:12.669318 systemd[1]: Created slice kubepods-besteffort-pod89945528_1586_4a50_85f4_e1cc83f6a348.slice - libcontainer container kubepods-besteffort-pod89945528_1586_4a50_85f4_e1cc83f6a348.slice. Jan 15 00:42:12.712219 systemd[1]: Created slice kubepods-burstable-pod8a228c66_9272_4272_aa48_cfa8211b2a62.slice - libcontainer container kubepods-burstable-pod8a228c66_9272_4272_aa48_cfa8211b2a62.slice. Jan 15 00:42:12.823689 systemd[1]: Created slice kubepods-burstable-pod7d55d038_4c75_4fb7_a8e2_55a6787772a7.slice - libcontainer container kubepods-burstable-pod7d55d038_4c75_4fb7_a8e2_55a6787772a7.slice. Jan 15 00:42:12.847997 systemd[1]: Created slice kubepods-besteffort-pod2f1337fc_8e0d_4906_b1f0_90d0896b3f07.slice - libcontainer container kubepods-besteffort-pod2f1337fc_8e0d_4906_b1f0_90d0896b3f07.slice. 
Jan 15 00:42:12.860986 containerd[1637]: time="2026-01-15T00:42:12.860418988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-tstqw,Uid:2f1337fc-8e0d-4906-b1f0-90d0896b3f07,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:42:12.919960 containerd[1637]: time="2026-01-15T00:42:12.919705773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d667669-rqqnw,Uid:4a17e4fb-67a0-4d0e-b72d-590a1df87758,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:12.937900 containerd[1637]: time="2026-01-15T00:42:12.937309451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztx8,Uid:1429bbd4-fe5b-4951-85dc-5a892fa35b68,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:12.959462 containerd[1637]: time="2026-01-15T00:42:12.959048432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-cmczc,Uid:e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:42:13.010084 containerd[1637]: time="2026-01-15T00:42:13.009985252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6564dcd6c8-7fkjs,Uid:89945528-1586-4a50-85f4-e1cc83f6a348,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:13.036706 kubelet[2808]: E0115 00:42:13.032978 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:13.054999 containerd[1637]: time="2026-01-15T00:42:13.047073183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,}" Jan 15 00:42:13.062668 systemd[1]: Created slice kubepods-besteffort-pod5de5824a_09ed_431d_8ba6_dbc85139b40f.slice - libcontainer container kubepods-besteffort-pod5de5824a_09ed_431d_8ba6_dbc85139b40f.slice. Jan 15 00:42:13.112066 containerd[1637]: time="2026-01-15T00:42:13.109326745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7lg4,Uid:5de5824a-09ed-431d-8ba6-dbc85139b40f,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:13.136707 kubelet[2808]: E0115 00:42:13.136677 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:13.157112 containerd[1637]: time="2026-01-15T00:42:13.156481573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j5chh,Uid:7d55d038-4c75-4fb7-a8e2-55a6787772a7,Namespace:kube-system,Attempt:0,}" Jan 15 00:42:13.671379 containerd[1637]: time="2026-01-15T00:42:13.671114851Z" level=error msg="Failed to destroy network for sandbox \"605b5f3877fe524d5b6ba3b952200f72f899571981647317b82faf6610f89a12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.682246 systemd[1]: run-netns-cni\x2d70629cab\x2de8fb\x2d1a59\x2d3ef1\x2d5d093d2a0456.mount: Deactivated successfully. 
Jan 15 00:42:13.691237 kubelet[2808]: E0115 00:42:13.691213 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:13.698237 containerd[1637]: time="2026-01-15T00:42:13.697963497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 00:42:13.704077 containerd[1637]: time="2026-01-15T00:42:13.704041577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztx8,Uid:1429bbd4-fe5b-4951-85dc-5a892fa35b68,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"605b5f3877fe524d5b6ba3b952200f72f899571981647317b82faf6610f89a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.705413 kubelet[2808]: E0115 00:42:13.705225 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605b5f3877fe524d5b6ba3b952200f72f899571981647317b82faf6610f89a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.705413 kubelet[2808]: E0115 00:42:13.705293 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605b5f3877fe524d5b6ba3b952200f72f899571981647317b82faf6610f89a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:13.705413 kubelet[2808]: E0115 00:42:13.705315 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605b5f3877fe524d5b6ba3b952200f72f899571981647317b82faf6610f89a12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:13.712249 kubelet[2808]: E0115 00:42:13.712036 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"605b5f3877fe524d5b6ba3b952200f72f899571981647317b82faf6610f89a12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:42:13.784195 containerd[1637]: time="2026-01-15T00:42:13.783255766Z" level=error msg="Failed to destroy network for sandbox \"4ef9a267f8890c866b4d009af3d2b6c1e07c006a7b8b00a58b3b0dc734516715\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 15 00:42:13.803686 systemd[1]: run-netns-cni\x2d31dff977\x2d037d\x2d1394\x2da378\x2da606be5f2a45.mount: Deactivated successfully. Jan 15 00:42:13.835841 containerd[1637]: time="2026-01-15T00:42:13.835195889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6564dcd6c8-7fkjs,Uid:89945528-1586-4a50-85f4-e1cc83f6a348,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef9a267f8890c866b4d009af3d2b6c1e07c006a7b8b00a58b3b0dc734516715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.838037 kubelet[2808]: E0115 00:42:13.837705 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef9a267f8890c866b4d009af3d2b6c1e07c006a7b8b00a58b3b0dc734516715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.838131 kubelet[2808]: E0115 00:42:13.838060 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef9a267f8890c866b4d009af3d2b6c1e07c006a7b8b00a58b3b0dc734516715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6564dcd6c8-7fkjs" Jan 15 00:42:13.838131 kubelet[2808]: E0115 00:42:13.838091 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef9a267f8890c866b4d009af3d2b6c1e07c006a7b8b00a58b3b0dc734516715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6564dcd6c8-7fkjs" Jan 15 00:42:13.838187 kubelet[2808]: E0115 00:42:13.838129 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6564dcd6c8-7fkjs_calico-system(89945528-1586-4a50-85f4-e1cc83f6a348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6564dcd6c8-7fkjs_calico-system(89945528-1586-4a50-85f4-e1cc83f6a348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ef9a267f8890c866b4d009af3d2b6c1e07c006a7b8b00a58b3b0dc734516715\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6564dcd6c8-7fkjs" podUID="89945528-1586-4a50-85f4-e1cc83f6a348" Jan 15 00:42:13.843907 containerd[1637]: time="2026-01-15T00:42:13.843466877Z" level=error msg="Failed to destroy network for sandbox \"aefa6c783e02a43045e381d6020a5af17ecebe015453551700443665192a84ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.852105 systemd[1]: run-netns-cni\x2d978cc6f1\x2d87d4\x2ddd92\x2ded62\x2d43da73d93540.mount: Deactivated successfully. 
Jan 15 00:42:13.865647 containerd[1637]: time="2026-01-15T00:42:13.865597435Z" level=error msg="Failed to destroy network for sandbox \"53f9c002796fc57f32e8650526916bee5bbe7b38098864453c68a3a7748cbf29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.873300 containerd[1637]: time="2026-01-15T00:42:13.872641137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-cmczc,Uid:e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aefa6c783e02a43045e381d6020a5af17ecebe015453551700443665192a84ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.874328 systemd[1]: run-netns-cni\x2d3df560bc\x2da499\x2dbcab\x2d7397\x2d837437466eb5.mount: Deactivated successfully. Jan 15 00:42:13.883449 kubelet[2808]: E0115 00:42:13.883327 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aefa6c783e02a43045e381d6020a5af17ecebe015453551700443665192a84ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.883449 kubelet[2808]: E0115 00:42:13.883402 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aefa6c783e02a43045e381d6020a5af17ecebe015453551700443665192a84ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" Jan 15 00:42:13.883449 kubelet[2808]: E0115 00:42:13.883423 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aefa6c783e02a43045e381d6020a5af17ecebe015453551700443665192a84ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" Jan 15 00:42:13.883690 kubelet[2808]: E0115 00:42:13.883461 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aefa6c783e02a43045e381d6020a5af17ecebe015453551700443665192a84ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:42:13.898359 containerd[1637]: time="2026-01-15T00:42:13.898320669Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53f9c002796fc57f32e8650526916bee5bbe7b38098864453c68a3a7748cbf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.900059 kubelet[2808]: E0115 00:42:13.899639 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53f9c002796fc57f32e8650526916bee5bbe7b38098864453c68a3a7748cbf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:13.900059 kubelet[2808]: E0115 00:42:13.899913 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53f9c002796fc57f32e8650526916bee5bbe7b38098864453c68a3a7748cbf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:13.900059 kubelet[2808]: E0115 00:42:13.899948 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53f9c002796fc57f32e8650526916bee5bbe7b38098864453c68a3a7748cbf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:13.900266 kubelet[2808]: E0115 00:42:13.900001 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mts8m_kube-system(8a228c66-9272-4272-aa48-cfa8211b2a62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mts8m_kube-system(8a228c66-9272-4272-aa48-cfa8211b2a62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53f9c002796fc57f32e8650526916bee5bbe7b38098864453c68a3a7748cbf29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mts8m" podUID="8a228c66-9272-4272-aa48-cfa8211b2a62" Jan 15 00:42:13.979965 containerd[1637]: time="2026-01-15T00:42:13.979626754Z" level=error msg="Failed to destroy network for sandbox \"f02c28c46c6682613750ad4efad90ad059988d2177a97b4355a695fadc5b9d1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.021594 containerd[1637]: time="2026-01-15T00:42:14.020001308Z" level=error msg="Failed to destroy network for sandbox \"dc624b325889351e076cf8bd3ceeb6d2ee36039080d9bd53b94a37e8c5c38938\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.022409 containerd[1637]: time="2026-01-15T00:42:14.022343965Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-tstqw,Uid:2f1337fc-8e0d-4906-b1f0-90d0896b3f07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02c28c46c6682613750ad4efad90ad059988d2177a97b4355a695fadc5b9d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.023883 kubelet[2808]: E0115 00:42:14.023349 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02c28c46c6682613750ad4efad90ad059988d2177a97b4355a695fadc5b9d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.023883 kubelet[2808]: E0115 00:42:14.023639 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02c28c46c6682613750ad4efad90ad059988d2177a97b4355a695fadc5b9d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" Jan 15 00:42:14.023883 kubelet[2808]: E0115 00:42:14.023673 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02c28c46c6682613750ad4efad90ad059988d2177a97b4355a695fadc5b9d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" Jan 15 00:42:14.024064 kubelet[2808]: E0115 00:42:14.023953 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f02c28c46c6682613750ad4efad90ad059988d2177a97b4355a695fadc5b9d1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:42:14.044257 containerd[1637]: time="2026-01-15T00:42:14.044195185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d667669-rqqnw,Uid:4a17e4fb-67a0-4d0e-b72d-590a1df87758,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc624b325889351e076cf8bd3ceeb6d2ee36039080d9bd53b94a37e8c5c38938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.047329 kubelet[2808]: E0115 00:42:14.047295 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"dc624b325889351e076cf8bd3ceeb6d2ee36039080d9bd53b94a37e8c5c38938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.047617 kubelet[2808]: E0115 00:42:14.047484 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc624b325889351e076cf8bd3ceeb6d2ee36039080d9bd53b94a37e8c5c38938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" Jan 15 00:42:14.047928 kubelet[2808]: E0115 00:42:14.047698 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc624b325889351e076cf8bd3ceeb6d2ee36039080d9bd53b94a37e8c5c38938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" Jan 15 00:42:14.048075 kubelet[2808]: E0115 00:42:14.048026 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc624b325889351e076cf8bd3ceeb6d2ee36039080d9bd53b94a37e8c5c38938\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:42:14.049257 containerd[1637]: time="2026-01-15T00:42:14.049079932Z" level=error msg="Failed to destroy network for sandbox \"4eb767429775fe37629f5190cbc67763b9136e12485d2442350332652615371c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.069029 containerd[1637]: time="2026-01-15T00:42:14.068420808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j5chh,Uid:7d55d038-4c75-4fb7-a8e2-55a6787772a7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb767429775fe37629f5190cbc67763b9136e12485d2442350332652615371c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.070479 kubelet[2808]: E0115 00:42:14.070158 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb767429775fe37629f5190cbc67763b9136e12485d2442350332652615371c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 15 00:42:14.070479 kubelet[2808]: E0115 00:42:14.070207 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb767429775fe37629f5190cbc67763b9136e12485d2442350332652615371c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j5chh" Jan 15 00:42:14.070479 kubelet[2808]: E0115 00:42:14.070237 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb767429775fe37629f5190cbc67763b9136e12485d2442350332652615371c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j5chh" Jan 15 00:42:14.071308 kubelet[2808]: E0115 00:42:14.070281 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j5chh_kube-system(7d55d038-4c75-4fb7-a8e2-55a6787772a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j5chh_kube-system(7d55d038-4c75-4fb7-a8e2-55a6787772a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4eb767429775fe37629f5190cbc67763b9136e12485d2442350332652615371c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j5chh" podUID="7d55d038-4c75-4fb7-a8e2-55a6787772a7" Jan 15 00:42:14.116194 containerd[1637]: time="2026-01-15T00:42:14.115377814Z" level=error msg="Failed to destroy network for sandbox \"a6e1c741fae7a64ae41e8446308ff93de3a10ab78e263865ab31d03488bb4189\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.158863 containerd[1637]: time="2026-01-15T00:42:14.158346878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7lg4,Uid:5de5824a-09ed-431d-8ba6-dbc85139b40f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e1c741fae7a64ae41e8446308ff93de3a10ab78e263865ab31d03488bb4189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.162143 kubelet[2808]: E0115 00:42:14.160964 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e1c741fae7a64ae41e8446308ff93de3a10ab78e263865ab31d03488bb4189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:14.162143 kubelet[2808]: E0115 00:42:14.161035 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e1c741fae7a64ae41e8446308ff93de3a10ab78e263865ab31d03488bb4189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:42:14.162143 kubelet[2808]: E0115 00:42:14.161061 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e1c741fae7a64ae41e8446308ff93de3a10ab78e263865ab31d03488bb4189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:42:14.163087 kubelet[2808]: E0115 00:42:14.161123 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6e1c741fae7a64ae41e8446308ff93de3a10ab78e263865ab31d03488bb4189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:14.468034 systemd[1]: run-netns-cni\x2dd27c1d7a\x2d9aab\x2d68bc\x2da478\x2d79ad0202f801.mount: Deactivated successfully. Jan 15 00:42:14.468241 systemd[1]: run-netns-cni\x2da33d2833\x2d96cf\x2dc5e2\x2d8b56\x2dec6843c658c9.mount: Deactivated successfully. Jan 15 00:42:14.468316 systemd[1]: run-netns-cni\x2dc2199e89\x2d0aef\x2da933\x2d6b80\x2d356cd84b692e.mount: Deactivated successfully. Jan 15 00:42:14.468387 systemd[1]: run-netns-cni\x2d0a9e15e3\x2d56a5\x2d7a7e\x2d9bb6\x2d37d633743700.mount: Deactivated successfully. 
Jan 15 00:42:15.549626 kubelet[2808]: I0115 00:42:15.548980 2808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 00:42:15.550442 kubelet[2808]: E0115 00:42:15.550073 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:15.684000 audit[3927]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:15.699463 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 15 00:42:15.699680 kernel: audit: type=1325 audit(1768437735.684:566): table=filter:121 family=2 entries=21 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:15.706400 kubelet[2808]: E0115 00:42:15.706357 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:15.684000 audit[3927]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4f0b7540 a2=0 a3=7ffc4f0b752c items=0 ppid=2968 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:15.771346 kernel: audit: type=1300 audit(1768437735.684:566): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4f0b7540 a2=0 a3=7ffc4f0b752c items=0 ppid=2968 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:15.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:15.818980 kernel: audit: type=1327 audit(1768437735.684:566): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:15.822000 audit[3927]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:15.870465 kernel: audit: type=1325 audit(1768437735.822:567): table=nat:122 family=2 entries=19 op=nft_register_chain pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:15.870979 kernel: audit: type=1300 audit(1768437735.822:567): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc4f0b7540 a2=0 a3=7ffc4f0b752c items=0 ppid=2968 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:15.822000 audit[3927]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc4f0b7540 a2=0 a3=7ffc4f0b752c items=0 ppid=2968 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:15.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:15.924046 kernel: audit: type=1327 audit(1768437735.822:567): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:25.063203 kubelet[2808]: E0115 00:42:25.063056 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:25.072297 containerd[1637]: time="2026-01-15T00:42:25.072247314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,}" Jan 15 00:42:25.372178 containerd[1637]: time="2026-01-15T00:42:25.368238141Z" level=error msg="Failed to destroy network for sandbox \"b611b955d87994007126344b7521e72101039020ca9782ada60bf378e8c9b952\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:25.379307 systemd[1]: run-netns-cni\x2dc58b8e96\x2d2ab7\x2d841a\x2de7a8\x2d1a83423b7bd0.mount: Deactivated successfully. Jan 15 00:42:25.433639 containerd[1637]: time="2026-01-15T00:42:25.433136017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b611b955d87994007126344b7521e72101039020ca9782ada60bf378e8c9b952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:25.438201 kubelet[2808]: E0115 00:42:25.437142 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b611b955d87994007126344b7521e72101039020ca9782ada60bf378e8c9b952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:25.438201 kubelet[2808]: E0115 00:42:25.437688 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b611b955d87994007126344b7521e72101039020ca9782ada60bf378e8c9b952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:25.438201 kubelet[2808]: E0115 00:42:25.437931 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b611b955d87994007126344b7521e72101039020ca9782ada60bf378e8c9b952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:25.438400 kubelet[2808]: E0115 00:42:25.437992 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mts8m_kube-system(8a228c66-9272-4272-aa48-cfa8211b2a62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mts8m_kube-system(8a228c66-9272-4272-aa48-cfa8211b2a62)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"b611b955d87994007126344b7521e72101039020ca9782ada60bf378e8c9b952\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mts8m" podUID="8a228c66-9272-4272-aa48-cfa8211b2a62" Jan 15 00:42:26.051357 containerd[1637]: time="2026-01-15T00:42:26.051160806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-cmczc,Uid:e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:42:26.406386 containerd[1637]: time="2026-01-15T00:42:26.405611641Z" level=error msg="Failed to destroy network for sandbox \"a9c7e8062441126df24b1adce93c83f54eaa4f4c0ff3714b0ec729d14129e605\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:26.413970 systemd[1]: run-netns-cni\x2d998fc506\x2d0f65\x2ddff3\x2dbe60\x2d9db450014010.mount: Deactivated successfully. Jan 15 00:42:26.422061 containerd[1637]: time="2026-01-15T00:42:26.422014162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-cmczc,Uid:e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9c7e8062441126df24b1adce93c83f54eaa4f4c0ff3714b0ec729d14129e605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:26.425280 kubelet[2808]: E0115 00:42:26.425043 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9c7e8062441126df24b1adce93c83f54eaa4f4c0ff3714b0ec729d14129e605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:26.425280 kubelet[2808]: E0115 00:42:26.425215 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9c7e8062441126df24b1adce93c83f54eaa4f4c0ff3714b0ec729d14129e605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" Jan 15 00:42:26.425280 kubelet[2808]: E0115 00:42:26.425248 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9c7e8062441126df24b1adce93c83f54eaa4f4c0ff3714b0ec729d14129e605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" Jan 15 00:42:26.426109 kubelet[2808]: E0115 00:42:26.425303 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9c7e8062441126df24b1adce93c83f54eaa4f4c0ff3714b0ec729d14129e605\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:42:27.042106 containerd[1637]: time="2026-01-15T00:42:27.041276961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7lg4,Uid:5de5824a-09ed-431d-8ba6-dbc85139b40f,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:27.042106 containerd[1637]: time="2026-01-15T00:42:27.042059205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6564dcd6c8-7fkjs,Uid:89945528-1586-4a50-85f4-e1cc83f6a348,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:27.403942 containerd[1637]: time="2026-01-15T00:42:27.398283318Z" level=error msg="Failed to destroy network for sandbox \"9a414459a05f4d197ebd1306d6fb5109c5cf5bd9a25177f3c483ef39f898b8a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:27.407585 systemd[1]: run-netns-cni\x2d56b29788\x2d69dd\x2da980\x2de207\x2dc33722f9d08f.mount: Deactivated successfully. Jan 15 00:42:27.428922 containerd[1637]: time="2026-01-15T00:42:27.427401429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6564dcd6c8-7fkjs,Uid:89945528-1586-4a50-85f4-e1cc83f6a348,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a414459a05f4d197ebd1306d6fb5109c5cf5bd9a25177f3c483ef39f898b8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:27.430198 kubelet[2808]: E0115 00:42:27.428163 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a414459a05f4d197ebd1306d6fb5109c5cf5bd9a25177f3c483ef39f898b8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:27.430198 kubelet[2808]: E0115 00:42:27.428242 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a414459a05f4d197ebd1306d6fb5109c5cf5bd9a25177f3c483ef39f898b8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6564dcd6c8-7fkjs" Jan 15 00:42:27.430198 kubelet[2808]: E0115 00:42:27.428272 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a414459a05f4d197ebd1306d6fb5109c5cf5bd9a25177f3c483ef39f898b8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6564dcd6c8-7fkjs" Jan 15 
00:42:27.431567 kubelet[2808]: E0115 00:42:27.428547 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6564dcd6c8-7fkjs_calico-system(89945528-1586-4a50-85f4-e1cc83f6a348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6564dcd6c8-7fkjs_calico-system(89945528-1586-4a50-85f4-e1cc83f6a348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a414459a05f4d197ebd1306d6fb5109c5cf5bd9a25177f3c483ef39f898b8a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6564dcd6c8-7fkjs" podUID="89945528-1586-4a50-85f4-e1cc83f6a348" Jan 15 00:42:27.471953 containerd[1637]: time="2026-01-15T00:42:27.471170681Z" level=error msg="Failed to destroy network for sandbox \"b60d210c34b641d6aa5bbc275a83a7b8c53492e25bd707a16d0a71f0f8715d75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:27.476112 systemd[1]: run-netns-cni\x2df64edbd7\x2d72ab\x2d3a9b\x2d0bae\x2dc552be1f3d7d.mount: Deactivated successfully. Jan 15 00:42:27.525414 containerd[1637]: time="2026-01-15T00:42:27.525246584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7lg4,Uid:5de5824a-09ed-431d-8ba6-dbc85139b40f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60d210c34b641d6aa5bbc275a83a7b8c53492e25bd707a16d0a71f0f8715d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:27.527138 kubelet[2808]: E0115 00:42:27.526904 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60d210c34b641d6aa5bbc275a83a7b8c53492e25bd707a16d0a71f0f8715d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:27.527138 kubelet[2808]: E0115 00:42:27.526981 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60d210c34b641d6aa5bbc275a83a7b8c53492e25bd707a16d0a71f0f8715d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:42:27.527138 kubelet[2808]: E0115 00:42:27.527006 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60d210c34b641d6aa5bbc275a83a7b8c53492e25bd707a16d0a71f0f8715d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7lg4" Jan 15 00:42:27.527997 kubelet[2808]: E0115 00:42:27.527956 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b60d210c34b641d6aa5bbc275a83a7b8c53492e25bd707a16d0a71f0f8715d75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:28.066422 containerd[1637]: time="2026-01-15T00:42:28.064545719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-tstqw,Uid:2f1337fc-8e0d-4906-b1f0-90d0896b3f07,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:42:28.066422 containerd[1637]: time="2026-01-15T00:42:28.066288827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztx8,Uid:1429bbd4-fe5b-4951-85dc-5a892fa35b68,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:28.498222 containerd[1637]: time="2026-01-15T00:42:28.497298885Z" level=error msg="Failed to destroy network for sandbox \"37830b7f6f612fba48ec0cfcb3683299a05413ab5f97f142d85bc34299e7a9d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:28.498222 containerd[1637]: time="2026-01-15T00:42:28.497032841Z" level=error msg="Failed to destroy network for sandbox \"cab38ade394ce23e2a249f17541f1515deecdd5f681ad8328f6f0c5847fe9f61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:28.509595 systemd[1]: run-netns-cni\x2d466794ff\x2d1db4\x2d4db5\x2d9abd\x2d0bff0437b194.mount: Deactivated successfully. Jan 15 00:42:28.510039 systemd[1]: run-netns-cni\x2de6c6b989\x2dff86\x2d5f7e\x2ddb77\x2d6a3636b8fe6c.mount: Deactivated successfully. 
Jan 15 00:42:28.510626 containerd[1637]: time="2026-01-15T00:42:28.510259653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztx8,Uid:1429bbd4-fe5b-4951-85dc-5a892fa35b68,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37830b7f6f612fba48ec0cfcb3683299a05413ab5f97f142d85bc34299e7a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:28.515940 kubelet[2808]: E0115 00:42:28.515690 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37830b7f6f612fba48ec0cfcb3683299a05413ab5f97f142d85bc34299e7a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:28.520900 kubelet[2808]: E0115 00:42:28.517177 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37830b7f6f612fba48ec0cfcb3683299a05413ab5f97f142d85bc34299e7a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:28.520900 kubelet[2808]: E0115 00:42:28.517222 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37830b7f6f612fba48ec0cfcb3683299a05413ab5f97f142d85bc34299e7a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9ztx8" Jan 15 00:42:28.520900 kubelet[2808]: E0115 00:42:28.517291 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37830b7f6f612fba48ec0cfcb3683299a05413ab5f97f142d85bc34299e7a9d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:42:28.522571 containerd[1637]: time="2026-01-15T00:42:28.519189160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-tstqw,Uid:2f1337fc-8e0d-4906-b1f0-90d0896b3f07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab38ade394ce23e2a249f17541f1515deecdd5f681ad8328f6f0c5847fe9f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:28.523052 kubelet[2808]: E0115 00:42:28.520083 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"cab38ade394ce23e2a249f17541f1515deecdd5f681ad8328f6f0c5847fe9f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:28.523052 kubelet[2808]: E0115 00:42:28.520147 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab38ade394ce23e2a249f17541f1515deecdd5f681ad8328f6f0c5847fe9f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" Jan 15 00:42:28.523052 kubelet[2808]: E0115 00:42:28.520172 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab38ade394ce23e2a249f17541f1515deecdd5f681ad8328f6f0c5847fe9f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" Jan 15 00:42:28.523167 kubelet[2808]: E0115 00:42:28.520324 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cab38ade394ce23e2a249f17541f1515deecdd5f681ad8328f6f0c5847fe9f61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:42:29.042106 kubelet[2808]: E0115 00:42:29.040698 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:29.044993 containerd[1637]: time="2026-01-15T00:42:29.043281341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d667669-rqqnw,Uid:4a17e4fb-67a0-4d0e-b72d-590a1df87758,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:29.044993 containerd[1637]: time="2026-01-15T00:42:29.044964806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j5chh,Uid:7d55d038-4c75-4fb7-a8e2-55a6787772a7,Namespace:kube-system,Attempt:0,}" Jan 15 00:42:29.435171 containerd[1637]: time="2026-01-15T00:42:29.434049101Z" level=error msg="Failed to destroy network for sandbox \"1213380f538e46d37bd1e47eb1e7e512bc6f9cf7644d83cfb30f30a9741a79bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:29.441179 systemd[1]: run-netns-cni\x2d09a79eae\x2d2722\x2dab84\x2d768c\x2d3d622c0d46a9.mount: Deactivated successfully. 
Jan 15 00:42:29.451173 containerd[1637]: time="2026-01-15T00:42:29.450707579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j5chh,Uid:7d55d038-4c75-4fb7-a8e2-55a6787772a7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1213380f538e46d37bd1e47eb1e7e512bc6f9cf7644d83cfb30f30a9741a79bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:29.451637 kubelet[2808]: E0115 00:42:29.451280 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1213380f538e46d37bd1e47eb1e7e512bc6f9cf7644d83cfb30f30a9741a79bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:29.451944 kubelet[2808]: E0115 00:42:29.451648 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1213380f538e46d37bd1e47eb1e7e512bc6f9cf7644d83cfb30f30a9741a79bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j5chh" Jan 15 00:42:29.451944 kubelet[2808]: E0115 00:42:29.451678 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1213380f538e46d37bd1e47eb1e7e512bc6f9cf7644d83cfb30f30a9741a79bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j5chh" Jan 15 00:42:29.452050 kubelet[2808]: E0115 00:42:29.452018 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j5chh_kube-system(7d55d038-4c75-4fb7-a8e2-55a6787772a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j5chh_kube-system(7d55d038-4c75-4fb7-a8e2-55a6787772a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1213380f538e46d37bd1e47eb1e7e512bc6f9cf7644d83cfb30f30a9741a79bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j5chh" podUID="7d55d038-4c75-4fb7-a8e2-55a6787772a7" Jan 15 00:42:29.539303 containerd[1637]: time="2026-01-15T00:42:29.539244845Z" level=error msg="Failed to destroy network for sandbox \"19103d9c4049ed34d456f4b8ac8323e08c3001fe375b0eccfafe351a0a8d831b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:29.552684 systemd[1]: run-netns-cni\x2d15067653\x2d35ae\x2d3346\x2d9807\x2db9899651fc55.mount: Deactivated successfully. 
Jan 15 00:42:29.569629 containerd[1637]: time="2026-01-15T00:42:29.569282416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d667669-rqqnw,Uid:4a17e4fb-67a0-4d0e-b72d-590a1df87758,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19103d9c4049ed34d456f4b8ac8323e08c3001fe375b0eccfafe351a0a8d831b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:29.570328 kubelet[2808]: E0115 00:42:29.570274 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19103d9c4049ed34d456f4b8ac8323e08c3001fe375b0eccfafe351a0a8d831b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:29.571533 kubelet[2808]: E0115 00:42:29.570341 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19103d9c4049ed34d456f4b8ac8323e08c3001fe375b0eccfafe351a0a8d831b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" Jan 15 00:42:29.571533 kubelet[2808]: E0115 00:42:29.570367 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19103d9c4049ed34d456f4b8ac8323e08c3001fe375b0eccfafe351a0a8d831b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" Jan 15 00:42:29.571533 kubelet[2808]: E0115 00:42:29.570624 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19103d9c4049ed34d456f4b8ac8323e08c3001fe375b0eccfafe351a0a8d831b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:42:36.070969 kubelet[2808]: E0115 00:42:36.069291 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:37.042226 kubelet[2808]: E0115 00:42:37.042188 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:37.048929 containerd[1637]: time="2026-01-15T00:42:37.048645464Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,}" Jan 15 00:42:37.282560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569503237.mount: Deactivated successfully. Jan 15 00:42:37.362992 containerd[1637]: time="2026-01-15T00:42:37.361318980Z" level=error msg="Failed to destroy network for sandbox \"a32a0578f3895414449076dd2cccd63e3decff27a486e19605c3e0618e99f9d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:37.368143 systemd[1]: run-netns-cni\x2d76681637\x2d0614\x2d231f\x2d9ddd\x2d766e42dde29b.mount: Deactivated successfully. Jan 15 00:42:37.375321 containerd[1637]: time="2026-01-15T00:42:37.375056537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:37.380369 containerd[1637]: time="2026-01-15T00:42:37.380314415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 15 00:42:37.388513 containerd[1637]: time="2026-01-15T00:42:37.384305569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a0578f3895414449076dd2cccd63e3decff27a486e19605c3e0618e99f9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:37.391329 kubelet[2808]: E0115 00:42:37.389599 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a0578f3895414449076dd2cccd63e3decff27a486e19605c3e0618e99f9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 00:42:37.391329 kubelet[2808]: E0115 00:42:37.389675 2808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a0578f3895414449076dd2cccd63e3decff27a486e19605c3e0618e99f9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:37.391329 kubelet[2808]: E0115 00:42:37.389702 2808 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a32a0578f3895414449076dd2cccd63e3decff27a486e19605c3e0618e99f9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mts8m" Jan 15 00:42:37.392220 kubelet[2808]: E0115 00:42:37.389958 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mts8m_kube-system(8a228c66-9272-4272-aa48-cfa8211b2a62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-mts8m_kube-system(8a228c66-9272-4272-aa48-cfa8211b2a62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a32a0578f3895414449076dd2cccd63e3decff27a486e19605c3e0618e99f9d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mts8m" podUID="8a228c66-9272-4272-aa48-cfa8211b2a62" Jan 15 00:42:37.395577 containerd[1637]: time="2026-01-15T00:42:37.393686151Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:37.400326 containerd[1637]: time="2026-01-15T00:42:37.400251789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 00:42:37.402040 containerd[1637]: time="2026-01-15T00:42:37.401692089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 23.703681254s" Jan 15 00:42:37.402040 containerd[1637]: time="2026-01-15T00:42:37.401934411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 15 00:42:37.438325 containerd[1637]: time="2026-01-15T00:42:37.438277691Z" level=info msg="CreateContainer within sandbox \"82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 00:42:37.474883 containerd[1637]: time="2026-01-15T00:42:37.474637576Z" level=info msg="Container 447749f05bd1b3311c2e8edce212e5cae17b19e69961aaea8a9340385059dafe: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:42:37.513987 containerd[1637]: time="2026-01-15T00:42:37.513666128Z" level=info msg="CreateContainer within sandbox \"82a81ade2915680b8df33e2339ee904075ab78d2072ed762067c76bd50fc8d32\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"447749f05bd1b3311c2e8edce212e5cae17b19e69961aaea8a9340385059dafe\"" Jan 15 00:42:37.547153 containerd[1637]: time="2026-01-15T00:42:37.547115106Z" level=info msg="StartContainer for \"447749f05bd1b3311c2e8edce212e5cae17b19e69961aaea8a9340385059dafe\"" Jan 15 00:42:37.552112 containerd[1637]: time="2026-01-15T00:42:37.552003388Z" level=info msg="connecting to shim 447749f05bd1b3311c2e8edce212e5cae17b19e69961aaea8a9340385059dafe" address="unix:///run/containerd/s/f01d80b87fd34682a394dec687a2ab7b3228f3ae7648ab5e25ea943e06c47fa9" protocol=ttrpc version=3 Jan 15 00:42:37.803328 systemd[1]: Started cri-containerd-447749f05bd1b3311c2e8edce212e5cae17b19e69961aaea8a9340385059dafe.scope - libcontainer container 447749f05bd1b3311c2e8edce212e5cae17b19e69961aaea8a9340385059dafe. 
Jan 15 00:42:37.975000 audit: BPF prog-id=170 op=LOAD Jan 15 00:42:37.989138 kernel: audit: type=1334 audit(1768437757.975:568): prog-id=170 op=LOAD Jan 15 00:42:37.989282 kernel: audit: type=1300 audit(1768437757.975:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:37.975000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:37.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:38.084016 kernel: audit: type=1327 audit(1768437757.975:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:37.975000 audit: BPF prog-id=171 op=LOAD Jan 15 00:42:38.102689 kernel: audit: type=1334 audit(1768437757.975:569): prog-id=171 op=LOAD Jan 15 00:42:38.103060 kernel: audit: type=1300 audit(1768437757.975:569): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:37.975000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:37.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:38.197877 kernel: audit: type=1327 audit(1768437757.975:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:38.197998 kernel: audit: type=1334 audit(1768437757.975:570): prog-id=171 op=UNLOAD Jan 15 00:42:37.975000 audit: BPF prog-id=171 op=UNLOAD Jan 15 00:42:38.210176 kernel: audit: type=1300 audit(1768437757.975:570): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:37.975000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:38.253275 kernel: audit: type=1327 audit(1768437757.975:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:37.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:37.975000 audit: BPF prog-id=170 op=UNLOAD Jan 15 00:42:38.304627 kernel: audit: type=1334 audit(1768437757.975:571): prog-id=170 op=UNLOAD Jan 15 00:42:37.975000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:37.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:37.975000 audit: BPF prog-id=172 op=LOAD Jan 15 00:42:37.975000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3345 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:37.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434373734396630356264316233333131633265386564636532313265 Jan 15 00:42:38.325906 containerd[1637]: time="2026-01-15T00:42:38.325628257Z" level=info msg="StartContainer for \"447749f05bd1b3311c2e8edce212e5cae17b19e69961aaea8a9340385059dafe\" returns successfully" Jan 15 00:42:38.830208 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 00:42:38.830330 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 15 00:42:39.054276 kubelet[2808]: E0115 00:42:39.053686 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:39.147047 kubelet[2808]: I0115 00:42:39.145319 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s9mwn" podStartSLOduration=3.153246146 podStartE2EDuration="44.145297568s" podCreationTimestamp="2026-01-15 00:41:55 +0000 UTC" firstStartedPulling="2026-01-15 00:41:56.412324642 +0000 UTC m=+28.743360045" lastFinishedPulling="2026-01-15 00:42:37.404376053 +0000 UTC m=+69.735411467" observedRunningTime="2026-01-15 00:42:39.139700434 +0000 UTC m=+71.470735878" watchObservedRunningTime="2026-01-15 00:42:39.145297568 +0000 UTC m=+71.476332982" Jan 15 00:42:39.708664 kubelet[2808]: I0115 00:42:39.707708 2808 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfkmj\" (UniqueName: \"kubernetes.io/projected/89945528-1586-4a50-85f4-e1cc83f6a348-kube-api-access-cfkmj\") pod \"89945528-1586-4a50-85f4-e1cc83f6a348\" (UID: \"89945528-1586-4a50-85f4-e1cc83f6a348\") " Jan 15 00:42:39.708664 kubelet[2808]: I0115 00:42:39.707980 2808 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-ca-bundle\") pod \"89945528-1586-4a50-85f4-e1cc83f6a348\" (UID: \"89945528-1586-4a50-85f4-e1cc83f6a348\") " Jan 15 00:42:39.708664 kubelet[2808]: I0115 00:42:39.708027 2808 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-backend-key-pair\") pod \"89945528-1586-4a50-85f4-e1cc83f6a348\" (UID: \"89945528-1586-4a50-85f4-e1cc83f6a348\") " Jan 15 00:42:39.711010 kubelet[2808]: I0115 00:42:39.710582 2808 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "89945528-1586-4a50-85f4-e1cc83f6a348" (UID: "89945528-1586-4a50-85f4-e1cc83f6a348"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 00:42:39.739688 kubelet[2808]: I0115 00:42:39.739170 2808 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "89945528-1586-4a50-85f4-e1cc83f6a348" (UID: "89945528-1586-4a50-85f4-e1cc83f6a348"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 00:42:39.741121 systemd[1]: var-lib-kubelet-pods-89945528\x2d1586\x2d4a50\x2d85f4\x2de1cc83f6a348-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 15 00:42:39.752151 systemd[1]: var-lib-kubelet-pods-89945528\x2d1586\x2d4a50\x2d85f4\x2de1cc83f6a348-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcfkmj.mount: Deactivated successfully. 
Jan 15 00:42:39.754588 kubelet[2808]: I0115 00:42:39.754331 2808 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89945528-1586-4a50-85f4-e1cc83f6a348-kube-api-access-cfkmj" (OuterVolumeSpecName: "kube-api-access-cfkmj") pod "89945528-1586-4a50-85f4-e1cc83f6a348" (UID: "89945528-1586-4a50-85f4-e1cc83f6a348"). InnerVolumeSpecName "kube-api-access-cfkmj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 00:42:39.809046 kubelet[2808]: I0115 00:42:39.808971 2808 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 15 00:42:39.809046 kubelet[2808]: I0115 00:42:39.809008 2808 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfkmj\" (UniqueName: \"kubernetes.io/projected/89945528-1586-4a50-85f4-e1cc83f6a348-kube-api-access-cfkmj\") on node \"localhost\" DevicePath \"\"" Jan 15 00:42:39.809046 kubelet[2808]: I0115 00:42:39.809020 2808 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89945528-1586-4a50-85f4-e1cc83f6a348-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 15 00:42:40.040927 kubelet[2808]: E0115 00:42:40.040644 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:40.044036 kubelet[2808]: E0115 00:42:40.043695 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:40.047626 containerd[1637]: time="2026-01-15T00:42:40.047210013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7lg4,Uid:5de5824a-09ed-431d-8ba6-dbc85139b40f,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:40.063708 kubelet[2808]: E0115 00:42:40.062099 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:40.113551 systemd[1]: Removed slice kubepods-besteffort-pod89945528_1586_4a50_85f4_e1cc83f6a348.slice - libcontainer container kubepods-besteffort-pod89945528_1586_4a50_85f4_e1cc83f6a348.slice. Jan 15 00:42:40.531930 kubelet[2808]: I0115 00:42:40.531702 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9-whisker-ca-bundle\") pod \"whisker-647ddb774d-q28b2\" (UID: \"2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9\") " pod="calico-system/whisker-647ddb774d-q28b2" Jan 15 00:42:40.533316 kubelet[2808]: I0115 00:42:40.532119 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl56x\" (UniqueName: \"kubernetes.io/projected/2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9-kube-api-access-fl56x\") pod \"whisker-647ddb774d-q28b2\" (UID: \"2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9\") " pod="calico-system/whisker-647ddb774d-q28b2" Jan 15 00:42:40.537053 systemd[1]: Created slice kubepods-besteffort-pod2bb8dcf6_c0ad_46fb_ba76_7aaeab87e7a9.slice - libcontainer container kubepods-besteffort-pod2bb8dcf6_c0ad_46fb_ba76_7aaeab87e7a9.slice. 
Jan 15 00:42:40.537278 kubelet[2808]: I0115 00:42:40.537107 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9-whisker-backend-key-pair\") pod \"whisker-647ddb774d-q28b2\" (UID: \"2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9\") " pod="calico-system/whisker-647ddb774d-q28b2" Jan 15 00:42:40.876307 containerd[1637]: time="2026-01-15T00:42:40.876146043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647ddb774d-q28b2,Uid:2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:41.049928 kubelet[2808]: E0115 00:42:41.049181 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:41.052181 containerd[1637]: time="2026-01-15T00:42:41.050703981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-cmczc,Uid:e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:42:41.055136 containerd[1637]: time="2026-01-15T00:42:41.054506321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j5chh,Uid:7d55d038-4c75-4fb7-a8e2-55a6787772a7,Namespace:kube-system,Attempt:0,}" Jan 15 00:42:41.798639 systemd-networkd[1536]: calib1af24a067a: Link UP Jan 15 00:42:41.804193 systemd-networkd[1536]: calib1af24a067a: Gained carrier Jan 15 00:42:41.887925 containerd[1637]: 2026-01-15 00:42:40.263 [INFO][4308] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 00:42:41.887925 containerd[1637]: 2026-01-15 00:42:40.419 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--q7lg4-eth0 csi-node-driver- calico-system 5de5824a-09ed-431d-8ba6-dbc85139b40f 778 0 2026-01-15 00:41:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-q7lg4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib1af24a067a [] [] }} ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-" Jan 15 00:42:41.887925 containerd[1637]: 2026-01-15 00:42:40.420 [INFO][4308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-eth0" Jan 15 00:42:41.887925 containerd[1637]: 2026-01-15 00:42:41.377 [INFO][4347] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" HandleID="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Workload="localhost-k8s-csi--node--driver--q7lg4-eth0" Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.383 [INFO][4347] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" 
HandleID="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Workload="localhost-k8s-csi--node--driver--q7lg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011d210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-q7lg4", "timestamp":"2026-01-15 00:42:41.377143604 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.383 [INFO][4347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.383 [INFO][4347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.384 [INFO][4347] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.436 [INFO][4347] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" host="localhost" Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.540 [INFO][4347] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.574 [INFO][4347] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.583 [INFO][4347] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.604 [INFO][4347] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:41.888660 containerd[1637]: 2026-01-15 00:42:41.605 [INFO][4347] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" host="localhost" Jan 15 00:42:41.891691 containerd[1637]: 2026-01-15 00:42:41.618 [INFO][4347] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb Jan 15 00:42:41.891691 containerd[1637]: 2026-01-15 00:42:41.648 [INFO][4347] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" host="localhost" Jan 15 00:42:41.891691 containerd[1637]: 2026-01-15 00:42:41.677 [INFO][4347] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" host="localhost" Jan 15 00:42:41.891691 containerd[1637]: 2026-01-15 00:42:41.678 [INFO][4347] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" host="localhost" Jan 15 00:42:41.891691 containerd[1637]: 2026-01-15 00:42:41.679 [INFO][4347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:42:41.891691 containerd[1637]: 2026-01-15 00:42:41.679 [INFO][4347] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" HandleID="k8s-pod-network.e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Workload="localhost-k8s-csi--node--driver--q7lg4-eth0" Jan 15 00:42:41.892665 containerd[1637]: 2026-01-15 00:42:41.713 [INFO][4308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q7lg4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5de5824a-09ed-431d-8ba6-dbc85139b40f", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q7lg4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1af24a067a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:41.893131 containerd[1637]: 2026-01-15 00:42:41.715 [INFO][4308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-eth0" Jan 15 00:42:41.893131 containerd[1637]: 2026-01-15 00:42:41.715 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1af24a067a ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-eth0" Jan 15 00:42:41.893131 containerd[1637]: 2026-01-15 00:42:41.807 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-eth0" Jan 15 00:42:41.893249 containerd[1637]: 2026-01-15 00:42:41.809 [INFO][4308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q7lg4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5de5824a-09ed-431d-8ba6-dbc85139b40f", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb", Pod:"csi-node-driver-q7lg4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1af24a067a", MAC:"76:55:2b:5d:68:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:41.893622 containerd[1637]: 2026-01-15 00:42:41.873 [INFO][4308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" Namespace="calico-system" Pod="csi-node-driver-q7lg4" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7lg4-eth0" Jan 15 00:42:41.996105 systemd-networkd[1536]: calif7ac3f4a99c: Link UP Jan 15 00:42:41.998161 systemd-networkd[1536]: calif7ac3f4a99c: Gained carrier Jan 15 00:42:42.046362 containerd[1637]: time="2026-01-15T00:42:42.045325149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-tstqw,Uid:2f1337fc-8e0d-4906-b1f0-90d0896b3f07,Namespace:calico-apiserver,Attempt:0,}" Jan 15 00:42:42.089359 kubelet[2808]: I0115 00:42:42.085041 2808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89945528-1586-4a50-85f4-e1cc83f6a348" path="/var/lib/kubelet/pods/89945528-1586-4a50-85f4-e1cc83f6a348/volumes" Jan 15 00:42:42.140190 containerd[1637]: 2026-01-15 00:42:41.075 [INFO][4359] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 00:42:42.140190 containerd[1637]: 2026-01-15 00:42:41.177 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--647ddb774d--q28b2-eth0 whisker-647ddb774d- calico-system 2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9 1030 0 2026-01-15 00:42:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:647ddb774d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-647ddb774d-q28b2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif7ac3f4a99c [] [] }} ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" 
Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-" Jan 15 00:42:42.140190 containerd[1637]: 2026-01-15 00:42:41.177 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-eth0" Jan 15 00:42:42.140190 containerd[1637]: 2026-01-15 00:42:41.408 [INFO][4411] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" HandleID="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Workload="localhost-k8s-whisker--647ddb774d--q28b2-eth0" Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.409 [INFO][4411] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" HandleID="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Workload="localhost-k8s-whisker--647ddb774d--q28b2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-647ddb774d-q28b2", "timestamp":"2026-01-15 00:42:41.408621392 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.409 [INFO][4411] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.679 [INFO][4411] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.680 [INFO][4411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.733 [INFO][4411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" host="localhost" Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.771 [INFO][4411] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.813 [INFO][4411] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.830 [INFO][4411] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.878 [INFO][4411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:42.141581 containerd[1637]: 2026-01-15 00:42:41.882 [INFO][4411] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" host="localhost" Jan 15 00:42:42.144174 containerd[1637]: 2026-01-15 00:42:41.908 [INFO][4411] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f Jan 15 00:42:42.144174 containerd[1637]: 2026-01-15 00:42:41.938 [INFO][4411] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" host="localhost" Jan 15 00:42:42.144174 containerd[1637]: 2026-01-15 00:42:41.959 [INFO][4411] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" host="localhost" Jan 15 00:42:42.144174 containerd[1637]: 2026-01-15 00:42:41.961 [INFO][4411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" host="localhost" Jan 15 00:42:42.144174 containerd[1637]: 2026-01-15 00:42:41.962 [INFO][4411] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:42:42.144174 containerd[1637]: 2026-01-15 00:42:41.963 [INFO][4411] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" HandleID="k8s-pod-network.6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Workload="localhost-k8s-whisker--647ddb774d--q28b2-eth0" Jan 15 00:42:42.144579 containerd[1637]: 2026-01-15 00:42:41.976 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--647ddb774d--q28b2-eth0", GenerateName:"whisker-647ddb774d-", Namespace:"calico-system", SelfLink:"", UID:"2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"647ddb774d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-647ddb774d-q28b2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif7ac3f4a99c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:42.144579 containerd[1637]: 2026-01-15 00:42:41.976 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-eth0" Jan 15 00:42:42.149112 containerd[1637]: 2026-01-15 00:42:41.976 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7ac3f4a99c ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-eth0" Jan 15 00:42:42.149112 containerd[1637]: 2026-01-15 00:42:42.005 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-eth0" Jan 15 00:42:42.150362 containerd[1637]: 2026-01-15 00:42:42.023 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--647ddb774d--q28b2-eth0", GenerateName:"whisker-647ddb774d-", Namespace:"calico-system", SelfLink:"", UID:"2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"647ddb774d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f", Pod:"whisker-647ddb774d-q28b2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif7ac3f4a99c", MAC:"16:93:9f:d4:a4:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:42.151551 containerd[1637]: 2026-01-15 00:42:42.104 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" Namespace="calico-system" Pod="whisker-647ddb774d-q28b2" WorkloadEndpoint="localhost-k8s-whisker--647ddb774d--q28b2-eth0" Jan 15 00:42:42.465164 systemd-networkd[1536]: cali6b48115ac1d: Link UP Jan 15 00:42:42.478051 systemd-networkd[1536]: cali6b48115ac1d: Gained carrier Jan 15 00:42:42.693672 containerd[1637]: 2026-01-15 00:42:41.253 [INFO][4385] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 00:42:42.693672 containerd[1637]: 2026-01-15 00:42:41.350 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0 calico-apiserver-5c9bd68d8f- calico-apiserver e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88 906 0 2026-01-15 00:41:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c9bd68d8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c9bd68d8f-cmczc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b48115ac1d [] [] }} ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-" Jan 15 00:42:42.693672 containerd[1637]: 2026-01-15 00:42:41.350 [INFO][4385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" Jan 15 00:42:42.693672 containerd[1637]: 2026-01-15 00:42:41.595 [INFO][4420] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" HandleID="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Workload="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:41.604 [INFO][4420] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" HandleID="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Workload="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a00b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c9bd68d8f-cmczc", "timestamp":"2026-01-15 00:42:41.595257303 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:41.604 [INFO][4420] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:41.963 [INFO][4420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:41.967 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:42.011 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" host="localhost" Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:42.098 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:42.180 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:42.219 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:42.252 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:42.695257 containerd[1637]: 2026-01-15 00:42:42.257 [INFO][4420] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" host="localhost" Jan 15 00:42:42.696323 containerd[1637]: 2026-01-15 00:42:42.311 [INFO][4420] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c Jan 15 00:42:42.696323 containerd[1637]: 2026-01-15 00:42:42.338 [INFO][4420] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" host="localhost" Jan 15 00:42:42.696323 containerd[1637]: 2026-01-15 00:42:42.402 [INFO][4420] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" host="localhost" Jan 15 00:42:42.696323 containerd[1637]: 2026-01-15 00:42:42.402 [INFO][4420] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" host="localhost" Jan 15 00:42:42.696323 containerd[1637]: 2026-01-15 00:42:42.406 [INFO][4420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 00:42:42.696323 containerd[1637]: 2026-01-15 00:42:42.406 [INFO][4420] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" HandleID="k8s-pod-network.1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Workload="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" Jan 15 00:42:42.696653 containerd[1637]: 2026-01-15 00:42:42.425 [INFO][4385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0", GenerateName:"calico-apiserver-5c9bd68d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd68d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c9bd68d8f-cmczc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b48115ac1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:42.699188 containerd[1637]: 2026-01-15 00:42:42.426 [INFO][4385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" Jan 15 00:42:42.699188 containerd[1637]: 2026-01-15 00:42:42.427 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b48115ac1d ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" Jan 15 00:42:42.699188 containerd[1637]: 2026-01-15 00:42:42.480 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" 
Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" Jan 15 00:42:42.699312 containerd[1637]: 2026-01-15 00:42:42.488 [INFO][4385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0", GenerateName:"calico-apiserver-5c9bd68d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd68d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c", Pod:"calico-apiserver-5c9bd68d8f-cmczc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b48115ac1d", MAC:"9e:60:2a:2b:a4:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:42.699710 containerd[1637]: 2026-01-15 00:42:42.589 [INFO][4385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-cmczc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--cmczc-eth0" Jan 15 00:42:42.881926 containerd[1637]: time="2026-01-15T00:42:42.881217931Z" level=info msg="connecting to shim 6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f" address="unix:///run/containerd/s/593cbd16e1d7d595fa289390289d8437ee2e3d50a82face39083622d64636674" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:42:42.988359 systemd-networkd[1536]: calib1af24a067a: Gained IPv6LL Jan 15 00:42:43.018614 containerd[1637]: time="2026-01-15T00:42:43.016186471Z" level=info msg="connecting to shim e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb" address="unix:///run/containerd/s/931356b8c6f38a3bcd64dc4185694b50debe1c3b7b34a81beffee0ffcc05d67b" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:42:43.201227 systemd-networkd[1536]: calif51e4034c52: Link UP Jan 15 00:42:43.237526 systemd-networkd[1536]: calif51e4034c52: Gained carrier Jan 15 00:42:43.251153 systemd-networkd[1536]: calif7ac3f4a99c: Gained IPv6LL Jan 15 00:42:43.335315 containerd[1637]: time="2026-01-15T00:42:43.335030090Z" level=info msg="connecting to shim 
1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c" address="unix:///run/containerd/s/fd6223789ed224b59e69c0becbf0cfab5ea89c7a50ed7205f0ef6daea1d7d910" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:42:43.457348 systemd[1]: Started cri-containerd-6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f.scope - libcontainer container 6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f. Jan 15 00:42:43.473104 containerd[1637]: 2026-01-15 00:42:41.425 [INFO][4399] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 00:42:43.473104 containerd[1637]: 2026-01-15 00:42:41.502 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--j5chh-eth0 coredns-668d6bf9bc- kube-system 7d55d038-4c75-4fb7-a8e2-55a6787772a7 902 0 2026-01-15 00:41:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-j5chh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif51e4034c52 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-" Jan 15 00:42:43.473104 containerd[1637]: 2026-01-15 00:42:41.502 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" Jan 15 00:42:43.473104 containerd[1637]: 2026-01-15 00:42:41.723 [INFO][4431] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" HandleID="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Workload="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:41.724 [INFO][4431] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" HandleID="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Workload="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3be0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-j5chh", "timestamp":"2026-01-15 00:42:41.723321997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:41.725 [INFO][4431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.402 [INFO][4431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.406 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.466 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" host="localhost" Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.564 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.706 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.750 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.821 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:43.474097 containerd[1637]: 2026-01-15 00:42:42.821 [INFO][4431] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" host="localhost" Jan 15 00:42:43.474966 containerd[1637]: 2026-01-15 00:42:42.833 [INFO][4431] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d Jan 15 00:42:43.474966 containerd[1637]: 2026-01-15 00:42:42.897 [INFO][4431] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" host="localhost" Jan 15 00:42:43.474966 containerd[1637]: 2026-01-15 00:42:43.009 [INFO][4431] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" host="localhost" Jan 15 00:42:43.474966 containerd[1637]: 2026-01-15 00:42:43.010 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" host="localhost" Jan 15 00:42:43.474966 containerd[1637]: 2026-01-15 00:42:43.010 [INFO][4431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:42:43.474966 containerd[1637]: 2026-01-15 00:42:43.010 [INFO][4431] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" HandleID="k8s-pod-network.e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Workload="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" Jan 15 00:42:43.475161 containerd[1637]: 2026-01-15 00:42:43.126 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j5chh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d55d038-4c75-4fb7-a8e2-55a6787772a7", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-j5chh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif51e4034c52", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:43.475565 containerd[1637]: 2026-01-15 00:42:43.126 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" Jan 15 00:42:43.475565 containerd[1637]: 2026-01-15 00:42:43.129 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif51e4034c52 ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" Jan 15 00:42:43.475565 containerd[1637]: 2026-01-15 00:42:43.301 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" Jan 15 00:42:43.475686 
containerd[1637]: 2026-01-15 00:42:43.321 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j5chh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d55d038-4c75-4fb7-a8e2-55a6787772a7", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d", Pod:"coredns-668d6bf9bc-j5chh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif51e4034c52", MAC:"4e:39:00:43:35:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:43.475686 containerd[1637]: 2026-01-15 00:42:43.403 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" Namespace="kube-system" Pod="coredns-668d6bf9bc-j5chh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j5chh-eth0" Jan 15 00:42:43.534320 systemd[1]: Started cri-containerd-e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb.scope - libcontainer container e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb. 
Jan 15 00:42:43.676000 audit: BPF prog-id=173 op=LOAD Jan 15 00:42:43.691149 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 15 00:42:43.691254 kernel: audit: type=1334 audit(1768437763.676:573): prog-id=173 op=LOAD Jan 15 00:42:43.709600 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:43.729100 kernel: audit: type=1334 audit(1768437763.681:574): prog-id=174 op=LOAD Jan 15 00:42:43.681000 audit: BPF prog-id=174 op=LOAD Jan 15 00:42:43.681000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.775137 systemd-networkd[1536]: cali6b48115ac1d: Gained IPv6LL Jan 15 00:42:43.790566 kernel: audit: type=1300 audit(1768437763.681:574): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.791931 systemd[1]: Started cri-containerd-1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c.scope - libcontainer container 1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c. Jan 15 00:42:43.800023 containerd[1637]: time="2026-01-15T00:42:43.798627309Z" level=info msg="connecting to shim e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d" address="unix:///run/containerd/s/5a6c3cc81cbc3e8f9ddbb528d98592bbddd39eee94990a41057035ee3166385c" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:42:43.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.866693 kernel: audit: type=1327 audit(1768437763.681:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.867098 kernel: audit: type=1334 audit(1768437763.681:575): prog-id=174 op=UNLOAD Jan 15 00:42:43.681000 audit: BPF prog-id=174 op=UNLOAD Jan 15 00:42:43.681000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.916533 kernel: audit: type=1300 audit(1768437763.681:575): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.917980 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:43.681000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.964037 kernel: audit: type=1327 audit(1768437763.681:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.683000 audit: BPF prog-id=175 op=LOAD Jan 15 00:42:43.683000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.025358 kernel: audit: type=1334 audit(1768437763.683:576): prog-id=175 op=LOAD Jan 15 00:42:44.025621 kernel: audit: type=1300 audit(1768437763.683:576): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.025650 containerd[1637]: time="2026-01-15T00:42:44.011613710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7lg4,Uid:5de5824a-09ed-431d-8ba6-dbc85139b40f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e38a70c0cf42ee00a882d4991b99745276d5375750c3e8fc36854262f1f5b9bb\"" Jan 15 00:42:44.087221 kernel: audit: type=1327 audit(1768437763.683:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.683000 audit: BPF prog-id=176 op=LOAD Jan 15 00:42:43.683000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.683000 audit: BPF prog-id=176 op=UNLOAD Jan 15 00:42:43.683000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.683000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.683000 audit: BPF prog-id=175 op=UNLOAD Jan 15 00:42:43.683000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.683000 audit: BPF prog-id=177 op=LOAD Jan 15 00:42:43.683000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4596 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533386137306330636634326565303061383832643439393162393937 Jan 15 00:42:43.889000 audit: BPF prog-id=178 op=LOAD Jan 15 00:42:43.905000 audit: BPF prog-id=179 op=LOAD Jan 15 00:42:43.905000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4588 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662313661343337373430356439633936343237396361393038373532 Jan 15 00:42:43.905000 audit: BPF prog-id=179 op=UNLOAD Jan 15 00:42:43.905000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662313661343337373430356439633936343237396361393038373532 Jan 15 00:42:43.906000 audit: BPF prog-id=180 op=LOAD Jan 15 00:42:43.906000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4588 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.906000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662313661343337373430356439633936343237396361393038373532 Jan 15 00:42:43.906000 audit: BPF prog-id=181 op=LOAD Jan 15 00:42:43.906000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4588 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662313661343337373430356439633936343237396361393038373532 Jan 15 00:42:43.906000 audit: BPF prog-id=181 op=UNLOAD Jan 15 00:42:43.906000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662313661343337373430356439633936343237396361393038373532 Jan 15 00:42:43.906000 audit: BPF prog-id=180 op=UNLOAD Jan 15 00:42:43.906000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662313661343337373430356439633936343237396361393038373532 Jan 15 00:42:43.906000 audit: BPF prog-id=182 op=LOAD Jan 15 00:42:43.906000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4588 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:43.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662313661343337373430356439633936343237396361393038373532 Jan 15 00:42:44.119021 kubelet[2808]: E0115 00:42:44.105529 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:44.125081 containerd[1637]: time="2026-01-15T00:42:44.120151443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztx8,Uid:1429bbd4-fe5b-4951-85dc-5a892fa35b68,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:44.125081 containerd[1637]: time="2026-01-15T00:42:44.120162142Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d667669-rqqnw,Uid:4a17e4fb-67a0-4d0e-b72d-590a1df87758,Namespace:calico-system,Attempt:0,}" Jan 15 00:42:44.125081 containerd[1637]: time="2026-01-15T00:42:44.122324083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:42:44.153000 audit: BPF prog-id=183 op=LOAD Jan 15 00:42:44.158000 audit: BPF prog-id=184 op=LOAD Jan 15 00:42:44.158000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4638 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165376261353036313233333566643031363430613137383233373763 Jan 15 00:42:44.158000 audit: BPF prog-id=184 op=UNLOAD Jan 15 00:42:44.158000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4638 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165376261353036313233333566643031363430613137383233373763 Jan 15 00:42:44.158000 audit: BPF prog-id=185 op=LOAD Jan 15 00:42:44.158000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4638 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165376261353036313233333566643031363430613137383233373763 Jan 15 00:42:44.158000 audit: BPF prog-id=186 op=LOAD Jan 15 00:42:44.158000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4638 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165376261353036313233333566643031363430613137383233373763 Jan 15 00:42:44.158000 audit: BPF prog-id=186 op=UNLOAD Jan 15 00:42:44.158000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4638 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.158000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165376261353036313233333566643031363430613137383233373763 Jan 15 00:42:44.158000 audit: BPF prog-id=185 op=UNLOAD Jan 15 00:42:44.158000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4638 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165376261353036313233333566643031363430613137383233373763 Jan 15 00:42:44.158000 audit: BPF prog-id=187 op=LOAD Jan 15 00:42:44.158000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4638 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165376261353036313233333566643031363430613137383233373763 Jan 15 00:42:44.202968 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:44.320545 systemd[1]: Started cri-containerd-e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d.scope - libcontainer container e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d. 
Jan 15 00:42:44.387258 containerd[1637]: time="2026-01-15T00:42:44.386655984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:44.413998 containerd[1637]: time="2026-01-15T00:42:44.413343433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:42:44.413998 containerd[1637]: time="2026-01-15T00:42:44.413570186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:44.415927 kubelet[2808]: E0115 00:42:44.414219 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:42:44.415927 kubelet[2808]: E0115 00:42:44.415661 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:42:44.423871 kubelet[2808]: E0115 00:42:44.423163 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:44.439049 containerd[1637]: time="2026-01-15T00:42:44.437613191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:42:44.468000 audit: BPF prog-id=188 op=LOAD Jan 15 00:42:44.492000 audit: BPF prog-id=189 op=LOAD Jan 15 00:42:44.492000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4716 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663532353435613664656634643130643434663361336135666366 Jan 15 00:42:44.493000 audit: BPF prog-id=189 op=UNLOAD Jan 15 00:42:44.493000 audit[4763]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663532353435613664656634643130643434663361336135666366 Jan 15 00:42:44.496000 audit: BPF prog-id=190 op=LOAD Jan 15 00:42:44.496000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4716 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663532353435613664656634643130643434663361336135666366 Jan 15 00:42:44.500333 systemd-networkd[1536]: cali599c710baca: Link UP Jan 15 00:42:44.498000 audit: BPF prog-id=191 op=LOAD Jan 15 00:42:44.498000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4716 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663532353435613664656634643130643434663361336135666366 Jan 15 00:42:44.505957 systemd-networkd[1536]: cali599c710baca: Gained carrier Jan 15 00:42:44.507000 audit: BPF prog-id=191 op=UNLOAD Jan 15 00:42:44.507000 audit[4763]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.507000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663532353435613664656634643130643434663361336135666366 Jan 15 00:42:44.507000 audit: BPF prog-id=190 op=UNLOAD Jan 15 00:42:44.507000 audit[4763]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663532353435613664656634643130643434663361336135666366 Jan 15 00:42:44.507000 audit: BPF prog-id=192 op=LOAD Jan 15 00:42:44.507000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4716 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663532353435613664656634643130643434663361336135666366 Jan 15 00:42:44.516997 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:44.584145 containerd[1637]: time="2026-01-15T00:42:44.583330631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-cmczc,Uid:e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1e7ba50612335fd01640a1782377c3707499daab3657916c45dc7c9b8d27dc6c\"" Jan 15 00:42:44.596950 containerd[1637]: time="2026-01-15T00:42:44.594564704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:44.631522 containerd[1637]: time="2026-01-15T00:42:44.627704862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:42:44.631522 containerd[1637]: time="2026-01-15T00:42:44.628164008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:44.632060 kubelet[2808]: E0115 00:42:44.629312 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:42:44.632060 kubelet[2808]: E0115 00:42:44.629491 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:42:44.632060 kubelet[2808]: E0115 00:42:44.629976 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:44.636981 kubelet[2808]: E0115 00:42:44.635932 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:44.637295 containerd[1637]: time="2026-01-15T00:42:44.632146784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:42.576 [INFO][4470] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 00:42:44.681306 containerd[1637]: 
2026-01-15 00:42:42.891 [INFO][4470] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0 calico-apiserver-5c9bd68d8f- calico-apiserver 2f1337fc-8e0d-4906-b1f0-90d0896b3f07 904 0 2026-01-15 00:41:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c9bd68d8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c9bd68d8f-tstqw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali599c710baca [] [] }} ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:42.967 [INFO][4470] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:43.802 [INFO][4611] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" HandleID="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Workload="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:43.805 [INFO][4611] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" HandleID="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Workload="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fdb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c9bd68d8f-tstqw", "timestamp":"2026-01-15 00:42:43.80205866 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:43.805 [INFO][4611] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:43.805 [INFO][4611] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:43.805 [INFO][4611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:43.860 [INFO][4611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:43.966 [INFO][4611] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.208 [INFO][4611] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.254 [INFO][4611] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.315 [INFO][4611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.315 [INFO][4611] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.332 [INFO][4611] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7 Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.359 [INFO][4611] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.435 [INFO][4611] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.435 [INFO][4611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" host="localhost" Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.435 [INFO][4611] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 00:42:44.681306 containerd[1637]: 2026-01-15 00:42:44.435 [INFO][4611] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" HandleID="k8s-pod-network.9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Workload="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" Jan 15 00:42:44.683997 containerd[1637]: 2026-01-15 00:42:44.460 [INFO][4470] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0", GenerateName:"calico-apiserver-5c9bd68d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f1337fc-8e0d-4906-b1f0-90d0896b3f07", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd68d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c9bd68d8f-tstqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali599c710baca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:44.683997 containerd[1637]: 2026-01-15 00:42:44.460 [INFO][4470] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" Jan 15 00:42:44.683997 containerd[1637]: 2026-01-15 00:42:44.461 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali599c710baca ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" Jan 15 00:42:44.683997 containerd[1637]: 2026-01-15 00:42:44.524 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" Jan 15 00:42:44.683997 containerd[1637]: 2026-01-15 00:42:44.533 [INFO][4470] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0", GenerateName:"calico-apiserver-5c9bd68d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f1337fc-8e0d-4906-b1f0-90d0896b3f07", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd68d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7", Pod:"calico-apiserver-5c9bd68d8f-tstqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali599c710baca", MAC:"12:a2:87:74:07:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:44.683997 containerd[1637]: 2026-01-15 00:42:44.641 [INFO][4470] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" Namespace="calico-apiserver" Pod="calico-apiserver-5c9bd68d8f-tstqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9bd68d8f--tstqw-eth0" Jan 15 00:42:44.699469 containerd[1637]: time="2026-01-15T00:42:44.698691875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647ddb774d-q28b2,Uid:2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b16a4377405d9c964279ca90875296cb5c49ee80b300c84817098d9939d182f\"" Jan 15 00:42:44.734915 containerd[1637]: time="2026-01-15T00:42:44.730956442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:44.738668 containerd[1637]: time="2026-01-15T00:42:44.738137662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:42:44.738668 containerd[1637]: time="2026-01-15T00:42:44.738335311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:44.739031 kubelet[2808]: E0115 00:42:44.738573 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 
00:42:44.739031 kubelet[2808]: E0115 00:42:44.738615 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:42:44.739164 kubelet[2808]: E0115 00:42:44.739058 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:44.743056 kubelet[2808]: E0115 00:42:44.740507 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:42:44.746310 containerd[1637]: time="2026-01-15T00:42:44.746019638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:42:44.845255 containerd[1637]: time="2026-01-15T00:42:44.839118262Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j5chh,Uid:7d55d038-4c75-4fb7-a8e2-55a6787772a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d\"" Jan 15 00:42:44.851986 kubelet[2808]: E0115 00:42:44.851500 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:44.877189 containerd[1637]: time="2026-01-15T00:42:44.876616010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:44.888920 containerd[1637]: time="2026-01-15T00:42:44.888558728Z" level=info msg="CreateContainer within sandbox \"e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:42:44.912094 containerd[1637]: time="2026-01-15T00:42:44.912039492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:42:44.913663 containerd[1637]: time="2026-01-15T00:42:44.912353648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:44.918910 kubelet[2808]: E0115 00:42:44.917272 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:42:44.918910 kubelet[2808]: E0115 00:42:44.917538 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:42:44.918910 kubelet[2808]: E0115 00:42:44.917647 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:924b8d98b6e94388a97563446e348ded,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:44.939553 containerd[1637]: time="2026-01-15T00:42:44.939090378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:42:44.941000 audit: BPF prog-id=193 op=LOAD Jan 15 00:42:44.941000 audit[4847]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc48e85c60 a2=98 a3=1fffffffffffffff items=0 ppid=4496 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.941000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:42:44.946000 audit: BPF prog-id=193 op=UNLOAD Jan 15 00:42:44.946000 audit[4847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc48e85c30 a3=0 items=0 ppid=4496 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.946000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:42:44.960000 audit: BPF prog-id=194 op=LOAD Jan 15 00:42:44.960000 audit[4847]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc48e85b40 a2=94 a3=3 items=0 ppid=4496 pid=4847 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.960000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:42:44.961000 audit: BPF prog-id=194 op=UNLOAD Jan 15 00:42:44.961000 audit[4847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc48e85b40 a2=94 a3=3 items=0 ppid=4496 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.961000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:42:44.962000 audit: BPF prog-id=195 op=LOAD Jan 15 00:42:44.962000 audit[4847]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc48e85b80 a2=94 a3=7ffc48e85d60 items=0 ppid=4496 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.962000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:42:44.962000 audit: BPF prog-id=195 op=UNLOAD Jan 15 00:42:44.962000 audit[4847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc48e85b80 a2=94 a3=7ffc48e85d60 items=0 ppid=4496 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.962000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 00:42:44.975158 systemd-networkd[1536]: calif51e4034c52: Gained IPv6LL Jan 15 00:42:44.985000 audit: BPF prog-id=196 op=LOAD Jan 15 00:42:44.985000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3c814570 a2=98 a3=3 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.985000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:44.985000 audit: BPF prog-id=196 op=UNLOAD Jan 15 00:42:44.985000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe3c814540 a3=0 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.985000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:44.989000 audit: BPF prog-id=197 op=LOAD Jan 15 00:42:44.989000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3c814360 a2=94 a3=54428f items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.989000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:44.989000 audit: BPF prog-id=197 op=UNLOAD Jan 15 00:42:44.989000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3c814360 a2=94 a3=54428f items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.989000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:44.990000 audit: BPF prog-id=198 op=LOAD Jan 15 00:42:44.990000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3c814390 a2=94 a3=2 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:44.990000 audit: BPF prog-id=198 op=UNLOAD Jan 15 00:42:44.990000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3c814390 a2=0 a3=2 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:44.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:45.041994 containerd[1637]: time="2026-01-15T00:42:45.041353562Z" level=info msg="Container 8b29876f743f5d9c4f0b65087c7aced00e07a5da55cdbe3e570c10428991a6f7: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:42:45.051123 containerd[1637]: time="2026-01-15T00:42:45.051090889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:45.060227 containerd[1637]: time="2026-01-15T00:42:45.060191447Z" level=info msg="connecting to shim 9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7" address="unix:///run/containerd/s/47cbb9110e1488f329a0a5423fcd34435e45a73acc7370d92a6ec3a66303cb73" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:42:45.064659 containerd[1637]: time="2026-01-15T00:42:45.064506412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:42:45.068014 containerd[1637]: time="2026-01-15T00:42:45.065043333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:45.074943 kubelet[2808]: E0115 00:42:45.073344 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:42:45.075102 kubelet[2808]: E0115 00:42:45.075072 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:42:45.078911 kubelet[2808]: E0115 00:42:45.077511 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:45.081331 containerd[1637]: time="2026-01-15T00:42:45.081155335Z" level=info msg="CreateContainer within sandbox \"e6f52545a6def4d10d44f3a3a5fcff0204912d02a5ec089038a4e344ea1d783d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b29876f743f5d9c4f0b65087c7aced00e07a5da55cdbe3e570c10428991a6f7\"" Jan 15 00:42:45.095979 kubelet[2808]: E0115 00:42:45.085131 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:42:45.110923 containerd[1637]: time="2026-01-15T00:42:45.110355027Z" level=info msg="StartContainer for \"8b29876f743f5d9c4f0b65087c7aced00e07a5da55cdbe3e570c10428991a6f7\"" Jan 15 00:42:45.137631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369921051.mount: Deactivated successfully. Jan 15 00:42:45.142643 containerd[1637]: time="2026-01-15T00:42:45.141136594Z" level=info msg="connecting to shim 8b29876f743f5d9c4f0b65087c7aced00e07a5da55cdbe3e570c10428991a6f7" address="unix:///run/containerd/s/5a6c3cc81cbc3e8f9ddbb528d98592bbddd39eee94990a41057035ee3166385c" protocol=ttrpc version=3 Jan 15 00:42:45.257259 kubelet[2808]: E0115 00:42:45.256199 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:42:45.269978 kubelet[2808]: E0115 00:42:45.269938 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:42:45.283067 kubelet[2808]: E0115 00:42:45.281262 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:45.358181 systemd[1]: Started cri-containerd-8b29876f743f5d9c4f0b65087c7aced00e07a5da55cdbe3e570c10428991a6f7.scope - libcontainer container 
8b29876f743f5d9c4f0b65087c7aced00e07a5da55cdbe3e570c10428991a6f7. Jan 15 00:42:45.437069 systemd[1]: Started cri-containerd-9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7.scope - libcontainer container 9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7. Jan 15 00:42:45.574000 audit: BPF prog-id=199 op=LOAD Jan 15 00:42:45.579000 audit: BPF prog-id=200 op=LOAD Jan 15 00:42:45.579000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4716 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862323938373666373433663564396334663062363530383763376163 Jan 15 00:42:45.579000 audit: BPF prog-id=200 op=UNLOAD Jan 15 00:42:45.579000 audit[4877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862323938373666373433663564396334663062363530383763376163 Jan 15 00:42:45.581000 audit: BPF prog-id=201 op=LOAD Jan 15 00:42:45.581000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4716 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862323938373666373433663564396334663062363530383763376163 Jan 15 00:42:45.583000 audit: BPF prog-id=202 op=LOAD Jan 15 00:42:45.583000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4716 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862323938373666373433663564396334663062363530383763376163 Jan 15 00:42:45.583000 audit: BPF prog-id=202 op=UNLOAD Jan 15 00:42:45.583000 audit[4877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.583000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862323938373666373433663564396334663062363530383763376163 Jan 15 00:42:45.583000 audit: BPF prog-id=201 op=UNLOAD Jan 15 00:42:45.583000 audit[4877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4716 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862323938373666373433663564396334663062363530383763376163 Jan 15 00:42:45.583000 audit: BPF prog-id=203 op=LOAD Jan 15 00:42:45.583000 audit[4877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4716 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862323938373666373433663564396334663062363530383763376163 Jan 15 00:42:45.677960 systemd-networkd[1536]: cali599c710baca: Gained IPv6LL Jan 15 00:42:45.742000 audit: BPF prog-id=204 op=LOAD Jan 15 00:42:45.751000 audit: BPF prog-id=205 op=LOAD Jan 15 00:42:45.751000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d6238 a2=98 a3=0 items=0 ppid=4857 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356639363136616261336338393236373931326633636662303265 Jan 15 00:42:45.752000 audit: BPF prog-id=205 op=UNLOAD Jan 15 00:42:45.752000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4857 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356639363136616261336338393236373931326633636662303265 Jan 15 00:42:45.757000 audit: BPF prog-id=206 op=LOAD Jan 15 00:42:45.757000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d6488 a2=98 a3=0 items=0 ppid=4857 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.757000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356639363136616261336338393236373931326633636662303265 Jan 15 00:42:45.758000 audit: BPF prog-id=207 op=LOAD Jan 15 00:42:45.758000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001d6218 a2=98 a3=0 items=0 ppid=4857 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356639363136616261336338393236373931326633636662303265 Jan 15 00:42:45.758000 audit: BPF prog-id=207 op=UNLOAD Jan 15 00:42:45.758000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4857 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356639363136616261336338393236373931326633636662303265 Jan 15 00:42:45.758000 audit: BPF prog-id=206 op=UNLOAD Jan 15 00:42:45.758000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4857 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356639363136616261336338393236373931326633636662303265 Jan 15 00:42:45.758000 audit: BPF prog-id=208 op=LOAD Jan 15 00:42:45.758000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d66e8 a2=98 a3=0 items=0 ppid=4857 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964356639363136616261336338393236373931326633636662303265 Jan 15 00:42:45.762000 audit[4928]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:45.762000 audit[4928]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd89345900 a2=0 a3=7ffd893458ec items=0 ppid=2968 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.762000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:45.765201 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:45.770000 audit[4928]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:45.770000 audit[4928]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd89345900 a2=0 a3=0 items=0 ppid=2968 pid=4928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:45.834910 containerd[1637]: time="2026-01-15T00:42:45.834087053Z" level=info msg="StartContainer for \"8b29876f743f5d9c4f0b65087c7aced00e07a5da55cdbe3e570c10428991a6f7\" returns successfully" Jan 15 00:42:45.843000 audit[4942]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4942 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:45.843000 audit[4942]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc40c6c2c0 a2=0 a3=7ffc40c6c2ac items=0 ppid=2968 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:45.862000 audit[4942]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4942 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:45.862000 audit[4942]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc40c6c2c0 a2=0 a3=0 items=0 ppid=2968 pid=4942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:45.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:46.001486 containerd[1637]: time="2026-01-15T00:42:45.996481116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd68d8f-tstqw,Uid:2f1337fc-8e0d-4906-b1f0-90d0896b3f07,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9d5f9616aba3c89267912f3cfb02ebb666a0837b65e01c37b3bcd33bcf91adc7\"" Jan 15 00:42:46.007164 containerd[1637]: time="2026-01-15T00:42:46.006101737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:42:46.100655 containerd[1637]: time="2026-01-15T00:42:46.099282897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:46.107589 containerd[1637]: time="2026-01-15T00:42:46.107226778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:42:46.107589 
containerd[1637]: time="2026-01-15T00:42:46.107558968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:46.110518 kubelet[2808]: E0115 00:42:46.110340 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:42:46.110643 kubelet[2808]: E0115 00:42:46.110617 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:42:46.111180 kubelet[2808]: E0115 00:42:46.111135 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znh8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:46.115053 kubelet[2808]: E0115 00:42:46.115020 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:42:46.129137 systemd-networkd[1536]: calib22ffffd989: Link UP Jan 15 00:42:46.151497 systemd-networkd[1536]: calib22ffffd989: Gained carrier Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.202 [INFO][4820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--9ztx8-eth0 goldmane-666569f655- calico-system 1429bbd4-fe5b-4951-85dc-5a892fa35b68 898 0 2026-01-15 00:41:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-9ztx8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib22ffffd989 [] [] }} ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.202 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-eth0" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.674 [INFO][4899] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" HandleID="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Workload="localhost-k8s-goldmane--666569f655--9ztx8-eth0" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.704 [INFO][4899] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" HandleID="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Workload="localhost-k8s-goldmane--666569f655--9ztx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-9ztx8", "timestamp":"2026-01-15 00:42:45.674062364 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.705 [INFO][4899] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.705 [INFO][4899] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.705 [INFO][4899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.780 [INFO][4899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.826 [INFO][4899] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.877 [INFO][4899] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.907 [INFO][4899] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.938 [INFO][4899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.940 [INFO][4899] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.955 [INFO][4899] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:45.992 [INFO][4899] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:46.067 [INFO][4899] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:46.068 [INFO][4899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" host="localhost" Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:46.068 [INFO][4899] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
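Aside: the containerd entries above trace Calico's IPAM flow for the goldmane pod: take the host-wide IPAM lock, confirm the affinity for block 192.168.88.128/26 on host "localhost", claim 192.168.88.134/26, then release the lock. As a quick sanity check (illustration only, using Python's ipaddress module), the claimed address does fall inside that /26, which covers 192.168.88.128 through 192.168.88.191:

```python
# Check the IPAM assignment recorded above: 192.168.88.134 claimed from block 192.168.88.128/26.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.134")

print(block.num_addresses)   # 64 addresses in the affinity block
print(block[0], block[-1])   # 192.168.88.128 192.168.88.191
print(claimed in block)      # True
```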
Jan 15 00:42:46.339968 containerd[1637]: 2026-01-15 00:42:46.068 [INFO][4899] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" HandleID="k8s-pod-network.550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Workload="localhost-k8s-goldmane--666569f655--9ztx8-eth0" Jan 15 00:42:46.351000 audit: BPF prog-id=209 op=LOAD Jan 15 00:42:46.351000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3c814250 a2=94 a3=1 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.351000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.351000 audit: BPF prog-id=209 op=UNLOAD Jan 15 00:42:46.351000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3c814250 a2=94 a3=1 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.351000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.361283 containerd[1637]: 2026-01-15 00:42:46.078 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9ztx8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1429bbd4-fe5b-4951-85dc-5a892fa35b68", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-9ztx8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib22ffffd989", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:46.361283 containerd[1637]: 2026-01-15 00:42:46.078 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-eth0" Jan 15 00:42:46.361283 containerd[1637]: 2026-01-15 00:42:46.078 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to calib22ffffd989 ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-eth0" Jan 15 00:42:46.361283 containerd[1637]: 2026-01-15 00:42:46.157 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-eth0" Jan 15 00:42:46.361283 containerd[1637]: 2026-01-15 00:42:46.179 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9ztx8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1429bbd4-fe5b-4951-85dc-5a892fa35b68", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b", Pod:"goldmane-666569f655-9ztx8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib22ffffd989", MAC:"a2:33:ff:6e:aa:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:46.361283 containerd[1637]: 2026-01-15 00:42:46.295 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" Namespace="calico-system" Pod="goldmane-666569f655-9ztx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9ztx8-eth0" Jan 15 00:42:46.362545 kubelet[2808]: E0115 00:42:46.360326 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:46.375000 audit: BPF prog-id=210 op=LOAD Jan 15 00:42:46.375000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3c814240 a2=94 a3=4 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.375000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.376000 audit: BPF prog-id=210 op=UNLOAD Jan 15 
00:42:46.376000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe3c814240 a2=0 a3=4 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.376000 audit: BPF prog-id=211 op=LOAD Jan 15 00:42:46.376000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe3c8140a0 a2=94 a3=5 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.376000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.377000 audit: BPF prog-id=211 op=UNLOAD Jan 15 00:42:46.377000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe3c8140a0 a2=0 a3=5 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.377000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.377000 audit: BPF prog-id=212 op=LOAD Jan 15 00:42:46.377000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3c8142c0 a2=94 a3=6 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.377000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.377000 audit: BPF prog-id=212 op=UNLOAD Jan 15 00:42:46.377000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe3c8142c0 a2=0 a3=6 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.377000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.378000 audit: BPF prog-id=213 op=LOAD Jan 15 00:42:46.378000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3c813a70 a2=94 a3=88 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.378000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.378000 audit: BPF prog-id=214 op=LOAD Jan 15 00:42:46.378000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe3c8138f0 a2=94 a3=2 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.378000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.378000 audit: BPF prog-id=214 op=UNLOAD Jan 15 00:42:46.378000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe3c813920 a2=0 a3=7ffe3c813a20 items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.378000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.380000 audit: BPF prog-id=213 op=UNLOAD Jan 15 00:42:46.380000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=9246d10 a2=0 a3=441e47ec7ceedc6f items=0 ppid=4496 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 00:42:46.398175 kubelet[2808]: E0115 00:42:46.398124 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:42:46.399328 kubelet[2808]: E0115 00:42:46.399291 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:42:46.433949 kubelet[2808]: E0115 00:42:46.433262 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:42:46.488000 audit: BPF prog-id=215 op=LOAD Jan 15 00:42:46.488000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2d767720 a2=98 a3=1999999999999999 items=0 ppid=4496 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F 
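Aside: the ImagePullBackOff entries above show kubelet delaying retries for the missing ghcr.io/flatcar/calico images rather than pulling in a tight loop. A rough sketch of that retry schedule, assuming kubelet's commonly cited default image back-off of a 10 s initial delay, doubled per failure and capped at 300 s (the actual parameters are not visible in this log):

```python
# Sketch of an exponential back-off like the ImagePullBackOff delays above.
# Assumed parameters, not taken from this log: 10 s initial delay, doubling, 300 s cap.
INITIAL, FACTOR, CAP = 10, 2, 300

delay = INITIAL
for attempt in range(1, 8):
    print(f"attempt {attempt}: wait ~{delay}s before retrying the pull")
    delay = min(delay * FACTOR, CAP)
# delays grow 10, 20, 40, 80, 160, 300, 300, ...
```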
Jan 15 00:42:46.488000 audit: BPF prog-id=215 op=UNLOAD Jan 15 00:42:46.488000 audit[4971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe2d7676f0 a3=0 items=0 ppid=4496 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:42:46.488000 audit: BPF prog-id=216 op=LOAD Jan 15 00:42:46.488000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2d767600 a2=94 a3=ffff items=0 ppid=4496 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:42:46.488000 audit: BPF prog-id=216 op=UNLOAD Jan 15 00:42:46.488000 audit[4971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe2d767600 a2=94 a3=ffff items=0 ppid=4496 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:42:46.488000 audit: BPF prog-id=217 op=LOAD Jan 15 00:42:46.488000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2d767640 a2=94 a3=7ffe2d767820 items=0 ppid=4496 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:42:46.488000 audit: BPF prog-id=217 op=UNLOAD Jan 15 00:42:46.488000 audit[4971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe2d767640 a2=94 a3=7ffe2d767820 items=0 ppid=4496 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.488000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 00:42:46.533279 containerd[1637]: 
time="2026-01-15T00:42:46.533102624Z" level=info msg="connecting to shim 550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b" address="unix:///run/containerd/s/b04693cf8438cd23482bc326b9e1a6a5cb5de1709c596116d7bca9c92d0da3f7" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:42:46.587580 kubelet[2808]: I0115 00:42:46.586346 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j5chh" podStartSLOduration=74.586326667 podStartE2EDuration="1m14.586326667s" podCreationTimestamp="2026-01-15 00:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:42:46.473520495 +0000 UTC m=+78.804555919" watchObservedRunningTime="2026-01-15 00:42:46.586326667 +0000 UTC m=+78.917362072" Jan 15 00:42:46.703000 audit[5003]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:46.703000 audit[5003]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc373615f0 a2=0 a3=7ffc373615dc items=0 ppid=2968 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:46.722000 audit[5003]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:46.722000 audit[5003]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc373615f0 a2=0 a3=0 items=0 ppid=2968 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:46.729273 systemd[1]: Started cri-containerd-550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b.scope - libcontainer container 550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b. 
Jan 15 00:42:46.755228 systemd-networkd[1536]: cali279976a2c23: Link UP Jan 15 00:42:46.757983 systemd-networkd[1536]: cali279976a2c23: Gained carrier Jan 15 00:42:46.855000 audit: BPF prog-id=218 op=LOAD Jan 15 00:42:46.857000 audit: BPF prog-id=219 op=LOAD Jan 15 00:42:46.857000 audit[4995]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4978 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306239333865623562643636346465663264623038383032623331 Jan 15 00:42:46.858000 audit: BPF prog-id=219 op=UNLOAD Jan 15 00:42:46.858000 audit[4995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4978 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306239333865623562643636346465663264623038383032623331 Jan 15 00:42:46.859000 audit: BPF prog-id=220 op=LOAD Jan 15 00:42:46.859000 audit[4995]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4978 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306239333865623562643636346465663264623038383032623331 Jan 15 00:42:46.860000 audit: BPF prog-id=221 op=LOAD Jan 15 00:42:46.860000 audit[4995]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4978 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306239333865623562643636346465663264623038383032623331 Jan 15 00:42:46.860000 audit: BPF prog-id=221 op=UNLOAD Jan 15 00:42:46.860000 audit[4995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4978 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.860000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306239333865623562643636346465663264623038383032623331 Jan 15 00:42:46.861000 audit: BPF prog-id=220 op=UNLOAD Jan 15 00:42:46.861000 audit[4995]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4978 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306239333865623562643636346465663264623038383032623331 Jan 15 00:42:46.861000 audit: BPF prog-id=222 op=LOAD Jan 15 00:42:46.861000 audit[4995]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4978 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:46.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535306239333865623562643636346465663264623038383032623331 Jan 15 00:42:46.869914 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:45.393 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0 calico-kube-controllers-c6d667669- calico-system 4a17e4fb-67a0-4d0e-b72d-590a1df87758 893 0 2026-01-15 00:41:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c6d667669 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c6d667669-rqqnw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali279976a2c23 [] [] }} ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:45.409 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:45.829 [INFO][4913] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" HandleID="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" 
Workload="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:45.831 [INFO][4913] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" HandleID="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Workload="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026c5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c6d667669-rqqnw", "timestamp":"2026-01-15 00:42:45.829313921 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:45.831 [INFO][4913] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.069 [INFO][4913] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.069 [INFO][4913] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.130 [INFO][4913] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.204 [INFO][4913] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.387 [INFO][4913] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.426 [INFO][4913] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.515 [INFO][4913] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.516 [INFO][4913] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.541 [INFO][4913] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497 Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.581 [INFO][4913] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.641 [INFO][4913] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.641 [INFO][4913] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" host="localhost" Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 
00:42:46.641 [INFO][4913] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 00:42:46.974059 containerd[1637]: 2026-01-15 00:42:46.641 [INFO][4913] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" HandleID="k8s-pod-network.7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Workload="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" Jan 15 00:42:46.976214 containerd[1637]: 2026-01-15 00:42:46.703 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0", GenerateName:"calico-kube-controllers-c6d667669-", Namespace:"calico-system", SelfLink:"", UID:"4a17e4fb-67a0-4d0e-b72d-590a1df87758", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6d667669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c6d667669-rqqnw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali279976a2c23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:46.976214 containerd[1637]: 2026-01-15 00:42:46.703 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" Jan 15 00:42:46.976214 containerd[1637]: 2026-01-15 00:42:46.703 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali279976a2c23 ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" Jan 15 00:42:46.976214 containerd[1637]: 2026-01-15 00:42:46.762 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" Jan 15 00:42:46.976214 containerd[1637]: 
2026-01-15 00:42:46.767 [INFO][4830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0", GenerateName:"calico-kube-controllers-c6d667669-", Namespace:"calico-system", SelfLink:"", UID:"4a17e4fb-67a0-4d0e-b72d-590a1df87758", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6d667669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497", Pod:"calico-kube-controllers-c6d667669-rqqnw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali279976a2c23", MAC:"02:63:6e:09:68:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:46.976214 containerd[1637]: 2026-01-15 00:42:46.953 [INFO][4830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" Namespace="calico-system" Pod="calico-kube-controllers-c6d667669-rqqnw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d667669--rqqnw-eth0" Jan 15 00:42:47.180706 containerd[1637]: time="2026-01-15T00:42:47.180329748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9ztx8,Uid:1429bbd4-fe5b-4951-85dc-5a892fa35b68,Namespace:calico-system,Attempt:0,} returns sandbox id \"550b938eb5bd664def2db08802b31747764ece103c515c761513329af8e6731b\"" Jan 15 00:42:47.230293 containerd[1637]: time="2026-01-15T00:42:47.229988257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:42:47.233286 containerd[1637]: time="2026-01-15T00:42:47.233134103Z" level=info msg="connecting to shim 7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497" address="unix:///run/containerd/s/39ccd14e98cd61e3bbe4d682f65fdf37e6b8b961373215433a8ebecfc6977b22" namespace=k8s.io protocol=ttrpc version=3 Jan 15 00:42:47.243695 systemd-networkd[1536]: vxlan.calico: Link UP Jan 15 00:42:47.243706 systemd-networkd[1536]: vxlan.calico: Gained carrier Jan 15 00:42:47.343160 containerd[1637]: time="2026-01-15T00:42:47.343032366Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:47.345617 containerd[1637]: time="2026-01-15T00:42:47.345331924Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:42:47.348938 containerd[1637]: time="2026-01-15T00:42:47.348064029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:47.350254 kubelet[2808]: E0115 00:42:47.349247 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:42:47.350254 kubelet[2808]: E0115 00:42:47.349319 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:42:47.350254 kubelet[2808]: E0115 00:42:47.349606 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rzpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:47.354320 kubelet[2808]: E0115 00:42:47.353705 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:42:47.411622 kubelet[2808]: E0115 00:42:47.411594 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:47.420592 kubelet[2808]: E0115 00:42:47.419128 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:42:47.420592 kubelet[2808]: E0115 00:42:47.419232 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:42:47.508698 systemd[1]: Started cri-containerd-7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497.scope - libcontainer container 7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497. 
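[editor's note] The ErrImagePull entries above come back as 404 Not Found from ghcr.io for ghcr.io/flatcar/calico/goldmane:v3.30.4. A minimal sketch for reproducing that lookup outside the kubelet, assuming ghcr.io follows the standard OCI distribution token flow (the repository and tag are taken from the failing PullImage entry above; nothing below is part of the log):

import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/goldmane"   # from the failing PullImage entry above
TAG = "v3.30.4"

# 1. Anonymous pull token for the repository (standard registry token flow; assumed to apply to ghcr.io).
token_url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{REPOSITORY}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# 2. Manifest lookup; a 404 here corresponds to the "not found" ErrImagePull reported by containerd.
manifest_url = f"https://{REGISTRY}/v2/{REPOSITORY}/manifests/{TAG}"
req = urllib.request.Request(manifest_url, headers={
    "Authorization": f"Bearer {token}",
    "Accept": "application/vnd.oci.image.index.v1+json, "
              "application/vnd.docker.distribution.manifest.list.v2+json",
})
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest found:", resp.status)
except urllib.error.HTTPError as err:
    print("manifest lookup failed:", err.code)   # expect 404, matching the log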
Jan 15 00:42:47.586000 audit: BPF prog-id=223 op=LOAD Jan 15 00:42:47.586000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81d8af60 a2=98 a3=0 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.586000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.588000 audit: BPF prog-id=223 op=UNLOAD Jan 15 00:42:47.588000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff81d8af30 a3=0 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.588000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.588000 audit: BPF prog-id=224 op=LOAD Jan 15 00:42:47.588000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81d8ad70 a2=94 a3=54428f items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.588000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.589000 audit: BPF prog-id=224 op=UNLOAD Jan 15 00:42:47.589000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff81d8ad70 a2=94 a3=54428f items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.589000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.589000 audit: BPF prog-id=225 op=LOAD Jan 15 00:42:47.589000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81d8ada0 a2=94 a3=2 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.589000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.589000 audit: BPF prog-id=225 op=UNLOAD Jan 15 00:42:47.589000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff81d8ada0 a2=0 a3=2 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.589000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.589000 audit: BPF prog-id=226 op=LOAD Jan 15 00:42:47.589000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81d8ab50 a2=94 a3=4 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.589000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.589000 audit: BPF prog-id=226 op=UNLOAD Jan 15 00:42:47.589000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff81d8ab50 a2=94 a3=4 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.589000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.589000 audit: BPF prog-id=227 op=LOAD Jan 15 00:42:47.589000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81d8ac50 a2=94 a3=7fff81d8add0 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.589000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.589000 audit: BPF prog-id=227 op=UNLOAD Jan 15 00:42:47.589000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff81d8ac50 a2=0 a3=7fff81d8add0 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.589000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.591000 audit: BPF prog-id=228 op=LOAD Jan 15 00:42:47.591000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81d8a380 a2=94 a3=2 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.591000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.591000 audit: BPF prog-id=228 op=UNLOAD Jan 15 00:42:47.591000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff81d8a380 a2=0 a3=2 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.591000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.591000 audit: BPF prog-id=229 op=LOAD Jan 15 00:42:47.591000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81d8a480 a2=94 a3=30 items=0 ppid=4496 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.591000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 00:42:47.719000 audit: BPF prog-id=230 op=LOAD Jan 15 00:42:47.719000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb643a410 a2=98 a3=0 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.719000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:47.725000 audit: BPF prog-id=230 op=UNLOAD Jan 15 00:42:47.725000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffb643a3e0 a3=0 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.725000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:47.733000 audit: BPF prog-id=231 op=LOAD Jan 15 00:42:47.733000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb643a200 a2=94 a3=54428f items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:47.736000 audit: BPF prog-id=231 op=UNLOAD Jan 15 00:42:47.736000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffb643a200 a2=94 a3=54428f items=0 ppid=4496 pid=5100 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:47.736000 audit: BPF prog-id=232 op=LOAD Jan 15 00:42:47.736000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb643a230 a2=94 a3=2 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:47.736000 audit: BPF prog-id=232 op=UNLOAD Jan 15 00:42:47.736000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffb643a230 a2=0 a3=2 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:47.753000 audit: BPF prog-id=233 op=LOAD Jan 15 00:42:47.756000 audit: BPF prog-id=234 op=LOAD Jan 15 00:42:47.756000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5050 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343930663065666235656466366334303561313963363435643832 Jan 15 00:42:47.756000 audit: BPF prog-id=234 op=UNLOAD Jan 15 00:42:47.756000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5050 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343930663065666235656466366334303561313963363435643832 Jan 15 00:42:47.758000 audit: BPF prog-id=235 op=LOAD Jan 15 00:42:47.758000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5050 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.758000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343930663065666235656466366334303561313963363435643832 Jan 15 00:42:47.758000 audit: BPF prog-id=236 op=LOAD Jan 15 00:42:47.758000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5050 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343930663065666235656466366334303561313963363435643832 Jan 15 00:42:47.758000 audit: BPF prog-id=236 op=UNLOAD Jan 15 00:42:47.758000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5050 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343930663065666235656466366334303561313963363435643832 Jan 15 00:42:47.758000 audit: BPF prog-id=235 op=UNLOAD Jan 15 00:42:47.758000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5050 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343930663065666235656466366334303561313963363435643832 Jan 15 00:42:47.758000 audit: BPF prog-id=237 op=LOAD Jan 15 00:42:47.758000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5050 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343930663065666235656466366334303561313963363435643832 Jan 15 00:42:47.778537 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:47.789184 systemd-networkd[1536]: calib22ffffd989: Gained IPv6LL Jan 15 00:42:47.831000 audit[5102]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:47.831000 audit[5102]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff391ee950 a2=0 a3=7fff391ee93c items=0 ppid=2968 pid=5102 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:47.837000 audit[5102]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:47.837000 audit[5102]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff391ee950 a2=0 a3=0 items=0 ppid=2968 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:47.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:48.002688 containerd[1637]: time="2026-01-15T00:42:48.002571972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d667669-rqqnw,Uid:4a17e4fb-67a0-4d0e-b72d-590a1df87758,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d490f0efb5edf6c405a19c645d821a9a15de9ea3fe755804e4dedc6b6e03497\"" Jan 15 00:42:48.019603 containerd[1637]: time="2026-01-15T00:42:48.019332637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:42:48.110209 containerd[1637]: time="2026-01-15T00:42:48.110062325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:48.116454 containerd[1637]: time="2026-01-15T00:42:48.116242388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:48.116554 containerd[1637]: time="2026-01-15T00:42:48.116486634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:42:48.120013 kubelet[2808]: E0115 00:42:48.119201 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:42:48.120013 kubelet[2808]: E0115 00:42:48.119506 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:42:48.121027 kubelet[2808]: E0115 00:42:48.119668 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krx4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:48.123641 kubelet[2808]: E0115 00:42:48.123574 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:42:48.435251 kubelet[2808]: E0115 00:42:48.433983 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 15 00:42:48.438938 kubelet[2808]: E0115 00:42:48.438570 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:42:48.446660 kubelet[2808]: E0115 00:42:48.446608 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:42:48.513000 audit: BPF prog-id=238 op=LOAD Jan 15 00:42:48.513000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb643a0f0 a2=94 a3=1 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.513000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.514000 audit: BPF prog-id=238 op=UNLOAD Jan 15 00:42:48.514000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffb643a0f0 a2=94 a3=1 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.514000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.550000 audit: BPF prog-id=239 op=LOAD Jan 15 00:42:48.550000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffb643a0e0 a2=94 a3=4 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.550000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.551000 audit: BPF prog-id=239 op=UNLOAD Jan 15 00:42:48.551000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffb643a0e0 a2=0 a3=4 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.551000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.552000 audit: BPF prog-id=240 op=LOAD Jan 15 00:42:48.552000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffb6439f40 a2=94 a3=5 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.552000 audit: BPF prog-id=240 op=UNLOAD Jan 15 00:42:48.552000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffb6439f40 a2=0 a3=5 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.552000 audit: BPF prog-id=241 op=LOAD Jan 15 00:42:48.552000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffb643a160 a2=94 a3=6 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.552000 audit: BPF prog-id=241 op=UNLOAD Jan 15 00:42:48.552000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffb643a160 a2=0 a3=6 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.554000 audit: BPF prog-id=242 op=LOAD Jan 15 00:42:48.554000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffb6439910 a2=94 a3=88 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.555000 audit: BPF prog-id=243 op=LOAD Jan 15 00:42:48.555000 audit[5100]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffb6439790 a2=94 a3=2 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.555000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.556000 audit: BPF prog-id=243 op=UNLOAD Jan 15 00:42:48.556000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffb64397c0 a2=0 a3=7fffb64398c0 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.556000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.557000 audit: BPF prog-id=242 op=UNLOAD Jan 15 00:42:48.557000 audit[5100]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=14f0d10 a2=0 a3=e21c76d80de9faf8 items=0 ppid=4496 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.557000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 00:42:48.629000 audit: BPF prog-id=229 op=UNLOAD Jan 15 00:42:48.629000 audit[4496]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000c82080 a2=0 a3=0 items=0 ppid=4457 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.629000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 15 00:42:48.699502 systemd-networkd[1536]: vxlan.calico: Gained IPv6LL Jan 15 00:42:48.703256 systemd-networkd[1536]: cali279976a2c23: Gained IPv6LL Jan 15 00:42:48.955051 kernel: kauditd_printk_skb: 376 callbacks suppressed Jan 15 00:42:48.955198 kernel: audit: type=1325 audit(1768437768.918:707): table=filter:131 family=2 entries=17 op=nft_register_rule pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:48.918000 audit[5123]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:48.956634 kernel: audit: type=1300 audit(1768437768.918:707): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc08a91010 a2=0 a3=7ffc08a90ffc items=0 ppid=2968 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.918000 audit[5123]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc08a91010 a2=0 a3=7ffc08a90ffc items=0 ppid=2968 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:48.918000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:49.063952 kernel: audit: type=1327 audit(1768437768.918:707): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:49.064033 kernel: audit: type=1325 audit(1768437769.043:708): table=nat:132 family=2 entries=35 op=nft_register_chain pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:49.043000 audit[5123]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:49.043000 audit[5123]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc08a91010 a2=0 a3=7ffc08a90ffc items=0 ppid=2968 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.142159 kernel: audit: type=1300 audit(1768437769.043:708): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc08a91010 a2=0 a3=7ffc08a90ffc items=0 ppid=2968 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:49.166933 kernel: audit: type=1327 audit(1768437769.043:708): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:49.340000 audit[5138]: NETFILTER_CFG table=mangle:133 family=2 entries=16 op=nft_register_chain pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:49.372891 kernel: audit: type=1325 audit(1768437769.340:709): table=mangle:133 family=2 entries=16 op=nft_register_chain pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:49.340000 audit[5138]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffea328aa80 a2=0 a3=7ffea328aa6c items=0 ppid=4496 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.438924 kernel: audit: type=1300 audit(1768437769.340:709): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffea328aa80 a2=0 a3=7ffea328aa6c items=0 ppid=4496 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.340000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:42:49.478060 kernel: audit: type=1327 audit(1768437769.340:709): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:42:49.436000 audit[5142]: NETFILTER_CFG table=nat:134 family=2 entries=15 op=nft_register_chain pid=5142 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:49.512667 kubelet[2808]: E0115 
00:42:49.511984 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:42:49.522187 kernel: audit: type=1325 audit(1768437769.436:710): table=nat:134 family=2 entries=15 op=nft_register_chain pid=5142 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:49.436000 audit[5142]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdaaa06cd0 a2=0 a3=7ffdaaa06cbc items=0 ppid=4496 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:42:49.532000 audit[5136]: NETFILTER_CFG table=raw:135 family=2 entries=21 op=nft_register_chain pid=5136 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:49.532000 audit[5136]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff280ffc60 a2=0 a3=7fff280ffc4c items=0 ppid=4496 pid=5136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.532000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:42:49.549000 audit[5139]: NETFILTER_CFG table=filter:136 family=2 entries=275 op=nft_register_chain pid=5139 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:49.549000 audit[5139]: SYSCALL arch=c000003e syscall=46 success=yes exit=161724 a0=3 a1=7fffec5fbe20 a2=0 a3=564475e70000 items=0 ppid=4496 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.549000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:42:49.756000 audit[5150]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:49.756000 audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=23108 a0=3 a1=7ffdb3553850 a2=0 a3=7ffdb355383c items=0 ppid=4496 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:49.756000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:42:52.041934 
kubelet[2808]: E0115 00:42:52.041311 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:52.045023 containerd[1637]: time="2026-01-15T00:42:52.044669818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,}" Jan 15 00:42:52.705537 systemd-networkd[1536]: cali277217ba7d7: Link UP Jan 15 00:42:52.708045 systemd-networkd[1536]: cali277217ba7d7: Gained carrier Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.275 [INFO][5153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mts8m-eth0 coredns-668d6bf9bc- kube-system 8a228c66-9272-4272-aa48-cfa8211b2a62 900 0 2026-01-15 00:41:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mts8m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali277217ba7d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.275 [INFO][5153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.451 [INFO][5167] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" HandleID="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Workload="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.452 [INFO][5167] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" HandleID="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Workload="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f950), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mts8m", "timestamp":"2026-01-15 00:42:52.451094699 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.452 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.452 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
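[editor's note] The audit SYSCALL/PROCTITLE records above store the full command line as a hex string with NUL bytes separating the arguments. A minimal decoding sketch; the sample value is only the first few arguments of one of the bpftool PROCTITLE records above, shortened for readability:

def decode_proctitle(hex_value: str) -> str:
    # bytes.fromhex accepts the uppercase hex emitted by audit; arguments are NUL-separated.
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", errors="replace") for p in raw.split(b"\x00") if p)

sample = "627066746F6F6C002D2D6A736F6E002D2D707265747479"  # prefix of a PROCTITLE value above
print(decode_proctitle(sample))   # -> bpftool --json --pretty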
Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.452 [INFO][5167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.498 [INFO][5167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.527 [INFO][5167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.564 [INFO][5167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.576 [INFO][5167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.598 [INFO][5167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.599 [INFO][5167] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.609 [INFO][5167] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.632 [INFO][5167] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.669 [INFO][5167] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.671 [INFO][5167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" host="localhost" Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.672 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
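The Calico IPAM records above show the plugin confirming affinity for block 192.168.88.128/26 on host "localhost" and then claiming 192.168.88.136/26 from it for the coredns pod. A quick sanity check (a minimal Python 3 sketch using the standard ipaddress module, not part of the Calico code path) that the assigned address falls inside the affine block:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")   # host-affine IPAM block from the log
    assigned = ipaddress.ip_address("192.168.88.136")   # address claimed for coredns-668d6bf9bc-mts8m

    print(assigned in block)       # True
    print(block.num_addresses)     # 64 addresses in a /26 block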
Jan 15 00:42:52.785221 containerd[1637]: 2026-01-15 00:42:52.673 [INFO][5167] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" HandleID="k8s-pod-network.c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Workload="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" Jan 15 00:42:52.803550 containerd[1637]: 2026-01-15 00:42:52.683 [INFO][5153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mts8m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a228c66-9272-4272-aa48-cfa8211b2a62", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mts8m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali277217ba7d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:52.803550 containerd[1637]: 2026-01-15 00:42:52.685 [INFO][5153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" Jan 15 00:42:52.803550 containerd[1637]: 2026-01-15 00:42:52.685 [INFO][5153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali277217ba7d7 ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" Jan 15 00:42:52.803550 containerd[1637]: 2026-01-15 00:42:52.713 [INFO][5153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" Jan 15 00:42:52.803550 
containerd[1637]: 2026-01-15 00:42:52.714 [INFO][5153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mts8m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a228c66-9272-4272-aa48-cfa8211b2a62", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 0, 41, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d", Pod:"coredns-668d6bf9bc-mts8m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali277217ba7d7", MAC:"d2:e1:c0:9b:46:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 00:42:52.803550 containerd[1637]: 2026-01-15 00:42:52.769 [INFO][5153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" Namespace="kube-system" Pod="coredns-668d6bf9bc-mts8m" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mts8m-eth0" Jan 15 00:42:52.932000 audit[5188]: NETFILTER_CFG table=filter:138 family=2 entries=54 op=nft_register_chain pid=5188 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 00:42:52.932000 audit[5188]: SYSCALL arch=c000003e syscall=46 success=yes exit=25540 a0=3 a1=7ffc00dadd10 a2=0 a3=7ffc00dadcfc items=0 ppid=4496 pid=5188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:52.932000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 00:42:52.963004 containerd[1637]: time="2026-01-15T00:42:52.962478079Z" level=info msg="connecting to shim c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d" address="unix:///run/containerd/s/83b62258998a38056344d9ab73ab3c74f828b03a8d866b9130cdc9c5c58f24a7" namespace=k8s.io 
protocol=ttrpc version=3 Jan 15 00:42:53.149031 systemd[1]: Started cri-containerd-c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d.scope - libcontainer container c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d. Jan 15 00:42:53.230000 audit: BPF prog-id=244 op=LOAD Jan 15 00:42:53.235000 audit: BPF prog-id=245 op=LOAD Jan 15 00:42:53.235000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5197 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336366561613136366433336264636463383662396139643063353233 Jan 15 00:42:53.237000 audit: BPF prog-id=245 op=UNLOAD Jan 15 00:42:53.237000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5197 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336366561613136366433336264636463383662396139643063353233 Jan 15 00:42:53.242000 audit: BPF prog-id=246 op=LOAD Jan 15 00:42:53.242000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5197 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336366561613136366433336264636463383662396139643063353233 Jan 15 00:42:53.244000 audit: BPF prog-id=247 op=LOAD Jan 15 00:42:53.244000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5197 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336366561613136366433336264636463383662396139643063353233 Jan 15 00:42:53.244000 audit: BPF prog-id=247 op=UNLOAD Jan 15 00:42:53.244000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5197 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.244000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336366561613136366433336264636463383662396139643063353233 Jan 15 00:42:53.244000 audit: BPF prog-id=246 op=UNLOAD Jan 15 00:42:53.244000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5197 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336366561613136366433336264636463383662396139643063353233 Jan 15 00:42:53.245000 audit: BPF prog-id=248 op=LOAD Jan 15 00:42:53.245000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5197 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336366561613136366433336264636463383662396139643063353233 Jan 15 00:42:53.252446 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 00:42:53.422019 containerd[1637]: time="2026-01-15T00:42:53.421240467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mts8m,Uid:8a228c66-9272-4272-aa48-cfa8211b2a62,Namespace:kube-system,Attempt:0,} returns sandbox id \"c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d\"" Jan 15 00:42:53.425216 kubelet[2808]: E0115 00:42:53.424981 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:53.437185 containerd[1637]: time="2026-01-15T00:42:53.437053801Z" level=info msg="CreateContainer within sandbox \"c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 00:42:53.502653 containerd[1637]: time="2026-01-15T00:42:53.502223902Z" level=info msg="Container 2637167e5c5f22019649c5d5103ce4c4c78dc55128b5c8afb0f9499059270af0: CDI devices from CRI Config.CDIDevices: []" Jan 15 00:42:53.527092 containerd[1637]: time="2026-01-15T00:42:53.527034099Z" level=info msg="CreateContainer within sandbox \"c66eaa166d33bdcdc86b9a9d0c523eddc34b28c1b7abff2cb2e36848b2fb218d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2637167e5c5f22019649c5d5103ce4c4c78dc55128b5c8afb0f9499059270af0\"" Jan 15 00:42:53.531690 containerd[1637]: time="2026-01-15T00:42:53.531523232Z" level=info msg="StartContainer for \"2637167e5c5f22019649c5d5103ce4c4c78dc55128b5c8afb0f9499059270af0\"" Jan 15 00:42:53.534325 containerd[1637]: time="2026-01-15T00:42:53.534063149Z" level=info msg="connecting to shim 2637167e5c5f22019649c5d5103ce4c4c78dc55128b5c8afb0f9499059270af0" 
address="unix:///run/containerd/s/83b62258998a38056344d9ab73ab3c74f828b03a8d866b9130cdc9c5c58f24a7" protocol=ttrpc version=3 Jan 15 00:42:53.694565 systemd[1]: Started cri-containerd-2637167e5c5f22019649c5d5103ce4c4c78dc55128b5c8afb0f9499059270af0.scope - libcontainer container 2637167e5c5f22019649c5d5103ce4c4c78dc55128b5c8afb0f9499059270af0. Jan 15 00:42:53.751000 audit: BPF prog-id=249 op=LOAD Jan 15 00:42:53.752000 audit: BPF prog-id=250 op=LOAD Jan 15 00:42:53.752000 audit[5233]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5197 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236333731363765356335663232303139363439633564353130336365 Jan 15 00:42:53.754000 audit: BPF prog-id=250 op=UNLOAD Jan 15 00:42:53.754000 audit[5233]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5197 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236333731363765356335663232303139363439633564353130336365 Jan 15 00:42:53.755000 audit: BPF prog-id=251 op=LOAD Jan 15 00:42:53.755000 audit[5233]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5197 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236333731363765356335663232303139363439633564353130336365 Jan 15 00:42:53.756000 audit: BPF prog-id=252 op=LOAD Jan 15 00:42:53.756000 audit[5233]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5197 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236333731363765356335663232303139363439633564353130336365 Jan 15 00:42:53.756000 audit: BPF prog-id=252 op=UNLOAD Jan 15 00:42:53.756000 audit[5233]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5197 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.756000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236333731363765356335663232303139363439633564353130336365 Jan 15 00:42:53.759000 audit: BPF prog-id=251 op=UNLOAD Jan 15 00:42:53.759000 audit[5233]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5197 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236333731363765356335663232303139363439633564353130336365 Jan 15 00:42:53.759000 audit: BPF prog-id=253 op=LOAD Jan 15 00:42:53.759000 audit[5233]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5197 pid=5233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:53.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236333731363765356335663232303139363439633564353130336365 Jan 15 00:42:53.858618 containerd[1637]: time="2026-01-15T00:42:53.858576314Z" level=info msg="StartContainer for \"2637167e5c5f22019649c5d5103ce4c4c78dc55128b5c8afb0f9499059270af0\" returns successfully" Jan 15 00:42:53.870990 systemd-networkd[1536]: cali277217ba7d7: Gained IPv6LL Jan 15 00:42:54.568979 kubelet[2808]: E0115 00:42:54.567622 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:54.649589 kubelet[2808]: I0115 00:42:54.649305 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mts8m" podStartSLOduration=82.649284143 podStartE2EDuration="1m22.649284143s" podCreationTimestamp="2026-01-15 00:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 00:42:54.644958626 +0000 UTC m=+86.975994050" watchObservedRunningTime="2026-01-15 00:42:54.649284143 +0000 UTC m=+86.980319548" Jan 15 00:42:54.715000 audit[5277]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:54.725243 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 15 00:42:54.725453 kernel: audit: type=1325 audit(1768437774.715:731): table=filter:139 family=2 entries=14 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:54.715000 audit[5277]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe8b8366c0 a2=0 a3=7ffe8b8366ac items=0 ppid=2968 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:54.799445 kernel: 
audit: type=1300 audit(1768437774.715:731): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe8b8366c0 a2=0 a3=7ffe8b8366ac items=0 ppid=2968 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:54.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:54.822179 kernel: audit: type=1327 audit(1768437774.715:731): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:54.822261 kernel: audit: type=1325 audit(1768437774.800:732): table=nat:140 family=2 entries=44 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:54.800000 audit[5277]: NETFILTER_CFG table=nat:140 family=2 entries=44 op=nft_register_rule pid=5277 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:54.845867 kernel: audit: type=1300 audit(1768437774.800:732): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe8b8366c0 a2=0 a3=7ffe8b8366ac items=0 ppid=2968 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:54.800000 audit[5277]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe8b8366c0 a2=0 a3=7ffe8b8366ac items=0 ppid=2968 pid=5277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:54.896013 kernel: audit: type=1327 audit(1768437774.800:732): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:54.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:55.070000 audit[5279]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:55.100039 kernel: audit: type=1325 audit(1768437775.070:733): table=filter:141 family=2 entries=14 op=nft_register_rule pid=5279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:55.100192 kernel: audit: type=1300 audit(1768437775.070:733): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdde41ce00 a2=0 a3=7ffdde41cdec items=0 ppid=2968 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:55.070000 audit[5279]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdde41ce00 a2=0 a3=7ffdde41cdec items=0 ppid=2968 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:55.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:55.165164 kernel: audit: type=1327 audit(1768437775.070:733): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:55.192000 audit[5279]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=5279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:55.223123 kernel: audit: type=1325 audit(1768437775.192:734): table=nat:142 family=2 entries=56 op=nft_register_chain pid=5279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:42:55.192000 audit[5279]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdde41ce00 a2=0 a3=7ffdde41cdec items=0 ppid=2968 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:42:55.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:42:55.575900 kubelet[2808]: E0115 00:42:55.575500 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:56.053179 containerd[1637]: time="2026-01-15T00:42:56.053097879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:42:56.165261 containerd[1637]: time="2026-01-15T00:42:56.164657762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:56.173957 containerd[1637]: time="2026-01-15T00:42:56.170950447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:42:56.173957 containerd[1637]: time="2026-01-15T00:42:56.171042492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:56.175683 kubelet[2808]: E0115 00:42:56.174933 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:42:56.175683 kubelet[2808]: E0115 00:42:56.175129 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:42:56.175683 kubelet[2808]: E0115 00:42:56.175240 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:56.181696 containerd[1637]: time="2026-01-15T00:42:56.181466064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:42:56.272950 containerd[1637]: time="2026-01-15T00:42:56.271184439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:56.278310 containerd[1637]: time="2026-01-15T00:42:56.277213134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:42:56.278310 containerd[1637]: time="2026-01-15T00:42:56.277570560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:56.278637 kubelet[2808]: E0115 00:42:56.277995 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:42:56.278637 kubelet[2808]: E0115 00:42:56.278071 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:42:56.278637 kubelet[2808]: E0115 00:42:56.278226 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:56.280217 kubelet[2808]: E0115 00:42:56.280132 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:42:56.589644 kubelet[2808]: E0115 00:42:56.586235 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:42:57.047008 containerd[1637]: 
time="2026-01-15T00:42:57.046599282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:42:57.164687 containerd[1637]: time="2026-01-15T00:42:57.164212393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:42:57.167968 containerd[1637]: time="2026-01-15T00:42:57.167538232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:42:57.168440 containerd[1637]: time="2026-01-15T00:42:57.168138452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:42:57.168507 kubelet[2808]: E0115 00:42:57.168417 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:42:57.168507 kubelet[2808]: E0115 00:42:57.168481 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:42:57.168687 kubelet[2808]: E0115 00:42:57.168630 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:42:57.170518 kubelet[2808]: E0115 00:42:57.170049 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:43:01.043247 containerd[1637]: time="2026-01-15T00:43:01.042931883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:43:01.129154 containerd[1637]: time="2026-01-15T00:43:01.129036624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:01.133262 containerd[1637]: time="2026-01-15T00:43:01.133067698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:43:01.133262 containerd[1637]: time="2026-01-15T00:43:01.133174366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:01.134230 kubelet[2808]: E0115 00:43:01.133870 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:43:01.134230 kubelet[2808]: E0115 00:43:01.134053 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:43:01.134230 kubelet[2808]: E0115 00:43:01.134907 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znh8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:01.137088 kubelet[2808]: E0115 00:43:01.136231 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:43:01.137146 containerd[1637]: time="2026-01-15T00:43:01.135224260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:43:01.232260 containerd[1637]: time="2026-01-15T00:43:01.232081046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:01.235099 containerd[1637]: time="2026-01-15T00:43:01.234572804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:43:01.235099 containerd[1637]: time="2026-01-15T00:43:01.234871583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:01.235505 kubelet[2808]: E0115 
00:43:01.235373 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:43:01.235505 kubelet[2808]: E0115 00:43:01.235482 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:43:01.235823 kubelet[2808]: E0115 00:43:01.235601 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:924b8d98b6e94388a97563446e348ded,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:01.239279 containerd[1637]: time="2026-01-15T00:43:01.239159142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:43:01.314950 containerd[1637]: time="2026-01-15T00:43:01.314623364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:01.318597 containerd[1637]: time="2026-01-15T00:43:01.318468014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:43:01.318597 containerd[1637]: time="2026-01-15T00:43:01.318569594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:01.319244 kubelet[2808]: E0115 00:43:01.318886 2808 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:43:01.319244 kubelet[2808]: E0115 00:43:01.318936 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:43:01.319244 kubelet[2808]: E0115 00:43:01.319041 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:01.321502 kubelet[2808]: E0115 00:43:01.321084 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:43:02.044688 containerd[1637]: time="2026-01-15T00:43:02.043589961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:43:02.270085 containerd[1637]: time="2026-01-15T00:43:02.269879306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:02.272862 containerd[1637]: time="2026-01-15T00:43:02.272273955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:43:02.272862 containerd[1637]: time="2026-01-15T00:43:02.272579853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:02.273197 kubelet[2808]: E0115 00:43:02.273059 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:43:02.274128 kubelet[2808]: E0115 00:43:02.273191 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:43:02.274128 kubelet[2808]: E0115 00:43:02.273544 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krx4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:02.275542 kubelet[2808]: E0115 00:43:02.274700 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:43:02.275690 containerd[1637]: time="2026-01-15T00:43:02.274709921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:43:02.467229 containerd[1637]: time="2026-01-15T00:43:02.466910724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:02.469612 containerd[1637]: time="2026-01-15T00:43:02.469436178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:43:02.469689 containerd[1637]: time="2026-01-15T00:43:02.469578802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:02.470264 kubelet[2808]: E0115 00:43:02.470159 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:43:02.470264 kubelet[2808]: E0115 00:43:02.470207 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:43:02.470469 kubelet[2808]: E0115 00:43:02.470393 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rzpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:02.472188 kubelet[2808]: E0115 00:43:02.471900 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:43:09.044005 kubelet[2808]: E0115 00:43:09.043956 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:43:10.352949 kubelet[2808]: E0115 00:43:10.352217 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:43:12.053021 kubelet[2808]: E0115 00:43:12.052539 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:43:13.043827 kubelet[2808]: E0115 00:43:13.042170 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:43:15.042981 kubelet[2808]: E0115 00:43:15.042144 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:43:15.045846 kubelet[2808]: E0115 00:43:15.045454 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:43:16.043850 kubelet[2808]: E0115 00:43:16.043130 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:43:23.043990 containerd[1637]: time="2026-01-15T00:43:23.043674586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:43:23.145358 containerd[1637]: time="2026-01-15T00:43:23.144454546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:23.147692 containerd[1637]: time="2026-01-15T00:43:23.147643976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:43:23.147899 containerd[1637]: time="2026-01-15T00:43:23.147845011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:23.148948 kubelet[2808]: E0115 00:43:23.148474 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:43:23.148948 kubelet[2808]: E0115 00:43:23.148533 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:43:23.148948 kubelet[2808]: E0115 00:43:23.148664 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:23.150538 kubelet[2808]: E0115 00:43:23.149955 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:43:25.044991 containerd[1637]: time="2026-01-15T00:43:25.044000553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:43:25.120339 containerd[1637]: time="2026-01-15T00:43:25.119979676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:25.123392 containerd[1637]: time="2026-01-15T00:43:25.123194410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:43:25.123392 containerd[1637]: time="2026-01-15T00:43:25.123381509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
Jan 15 00:43:25.124005 kubelet[2808]: E0115 00:43:25.123610 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:43:25.124005 kubelet[2808]: E0115 00:43:25.123676 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:43:25.124005 kubelet[2808]: E0115 00:43:25.123927 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krx4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:25.126157 kubelet[2808]: E0115 00:43:25.125859 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:43:26.044358 containerd[1637]: time="2026-01-15T00:43:26.043999973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:43:26.137198 containerd[1637]: time="2026-01-15T00:43:26.137063203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:26.144099 containerd[1637]: time="2026-01-15T00:43:26.144028830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:43:26.144099 containerd[1637]: time="2026-01-15T00:43:26.144136530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:26.146833 kubelet[2808]: E0115 00:43:26.146332 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:43:26.148945 kubelet[2808]: E0115 00:43:26.148342 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:43:26.148945 kubelet[2808]: E0115 00:43:26.148619 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:924b8d98b6e94388a97563446e348ded,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:26.154469 containerd[1637]: time="2026-01-15T00:43:26.154349537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:43:26.222395 containerd[1637]: time="2026-01-15T00:43:26.222339872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:26.226065 containerd[1637]: time="2026-01-15T00:43:26.225496302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:43:26.226065 containerd[1637]: time="2026-01-15T00:43:26.225605174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:26.226923 kubelet[2808]: E0115 00:43:26.226654 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:43:26.226923 kubelet[2808]: E0115 00:43:26.226914 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:43:26.227529 kubelet[2808]: E0115 00:43:26.227349 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:26.229440 containerd[1637]: time="2026-01-15T00:43:26.228476365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:43:26.312369 containerd[1637]: time="2026-01-15T00:43:26.311997862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:26.315896 containerd[1637]: time="2026-01-15T00:43:26.315474198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:43:26.315896 containerd[1637]: time="2026-01-15T00:43:26.315656448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:26.316874 kubelet[2808]: E0115 00:43:26.316446 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:43:26.316874 kubelet[2808]: E0115 00:43:26.316575 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:43:26.317387 kubelet[2808]: E0115 00:43:26.317024 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:26.318102 containerd[1637]: time="2026-01-15T00:43:26.318020279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:43:26.319894 kubelet[2808]: E0115 00:43:26.319576 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:43:26.396366 containerd[1637]: time="2026-01-15T00:43:26.395938306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:26.400467 containerd[1637]: time="2026-01-15T00:43:26.400398449Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:43:26.400571 containerd[1637]: time="2026-01-15T00:43:26.400523302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:26.401465 kubelet[2808]: E0115 00:43:26.401237 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:43:26.401465 kubelet[2808]: E0115 00:43:26.401455 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:43:26.401661 kubelet[2808]: E0115 00:43:26.401598 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:26.403823 kubelet[2808]: E0115 00:43:26.403462 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:43:30.047703 containerd[1637]: time="2026-01-15T00:43:30.047594120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:43:30.069453 kubelet[2808]: E0115 00:43:30.069224 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:43:30.132635 containerd[1637]: time="2026-01-15T00:43:30.132112163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:30.139451 containerd[1637]: time="2026-01-15T00:43:30.139140169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:43:30.139451 containerd[1637]: time="2026-01-15T00:43:30.139405404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:30.139926 kubelet[2808]: E0115 00:43:30.139674 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:43:30.141631 kubelet[2808]: E0115 00:43:30.140087 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:43:30.141631 kubelet[2808]: E0115 00:43:30.141395 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znh8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:30.142088 containerd[1637]: time="2026-01-15T00:43:30.141933649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:43:30.142977 kubelet[2808]: E0115 00:43:30.142631 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:43:30.245927 containerd[1637]: time="2026-01-15T00:43:30.245595593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:43:30.250027 containerd[1637]: time="2026-01-15T00:43:30.249694910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:43:30.250027 containerd[1637]: time="2026-01-15T00:43:30.249980413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:43:30.251375 kubelet[2808]: 
E0115 00:43:30.251113 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:43:30.251375 kubelet[2808]: E0115 00:43:30.251247 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:43:30.252885 kubelet[2808]: E0115 00:43:30.252445 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rzpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:43:30.254140 kubelet[2808]: E0115 00:43:30.253710 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:43:33.604437 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:50432.service - OpenSSH per-connection server daemon (10.0.0.1:50432). Jan 15 00:43:33.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:50432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:33.632239 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 15 00:43:33.632469 kernel: audit: type=1130 audit(1768437813.604:735): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:50432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:33.760000 audit[5344]: USER_ACCT pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:33.767074 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 50432 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:43:33.787871 kernel: audit: type=1101 audit(1768437813.760:736): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:33.787991 kernel: audit: type=1103 audit(1768437813.785:737): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:33.785000 audit[5344]: CRED_ACQ pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:33.788353 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:43:33.811951 systemd-logind[1618]: New session 8 of user core. 
Jan 15 00:43:33.786000 audit[5344]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ae0a520 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:33.848661 kernel: audit: type=1006 audit(1768437813.786:738): pid=5344 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 15 00:43:33.849371 kernel: audit: type=1300 audit(1768437813.786:738): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ae0a520 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:33.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:33.851095 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 00:43:33.860869 kernel: audit: type=1327 audit(1768437813.786:738): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:33.861000 audit[5344]: USER_START pid=5344 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:33.891899 kernel: audit: type=1105 audit(1768437813.861:739): pid=5344 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:33.866000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:33.916891 kernel: audit: type=1103 audit(1768437813.866:740): pid=5347 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:34.163252 sshd[5347]: Connection closed by 10.0.0.1 port 50432 Jan 15 00:43:34.163876 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Jan 15 00:43:34.166000 audit[5344]: USER_END pid=5344 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:34.179118 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:50432.service: Deactivated successfully. Jan 15 00:43:34.189464 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 00:43:34.166000 audit[5344]: CRED_DISP pid=5344 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:34.197642 systemd-logind[1618]: Session 8 logged out. 
Waiting for processes to exit. Jan 15 00:43:34.199932 systemd-logind[1618]: Removed session 8. Jan 15 00:43:34.217674 kernel: audit: type=1106 audit(1768437814.166:741): pid=5344 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:34.218058 kernel: audit: type=1104 audit(1768437814.166:742): pid=5344 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:34.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:50432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:37.046162 kubelet[2808]: E0115 00:43:37.046076 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:43:38.044499 kubelet[2808]: E0115 00:43:38.044236 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:43:38.044499 kubelet[2808]: E0115 00:43:38.044476 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:43:39.042832 kubelet[2808]: E0115 00:43:39.042579 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:43:39.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:50442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:39.194463 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:50442.service - OpenSSH per-connection server daemon (10.0.0.1:50442). Jan 15 00:43:39.200707 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:43:39.200921 kernel: audit: type=1130 audit(1768437819.193:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:50442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:39.326000 audit[5366]: USER_ACCT pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.339630 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:43:39.345179 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:43:39.354709 kernel: audit: type=1101 audit(1768437819.326:745): pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.342000 audit[5366]: CRED_ACQ pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.357929 systemd-logind[1618]: New session 9 of user core. 
Jan 15 00:43:39.396669 kernel: audit: type=1103 audit(1768437819.342:746): pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.396920 kernel: audit: type=1006 audit(1768437819.342:747): pid=5366 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 15 00:43:39.342000 audit[5366]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc18f7b540 a2=3 a3=0 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:39.399685 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 00:43:39.440557 kernel: audit: type=1300 audit(1768437819.342:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc18f7b540 a2=3 a3=0 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:39.440680 kernel: audit: type=1327 audit(1768437819.342:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:39.342000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:39.414000 audit[5366]: USER_START pid=5366 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.472121 kernel: audit: type=1105 audit(1768437819.414:748): pid=5366 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.421000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.496077 kernel: audit: type=1103 audit(1768437819.421:749): pid=5369 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.717934 sshd[5369]: Connection closed by 10.0.0.1 port 50442 Jan 15 00:43:39.719695 sshd-session[5366]: pam_unix(sshd:session): session closed for user core Jan 15 00:43:39.725000 audit[5366]: USER_END pid=5366 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.732605 systemd-logind[1618]: Session 9 logged out. Waiting for processes to exit. Jan 15 00:43:39.735403 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:50442.service: Deactivated successfully. 
Jan 15 00:43:39.741541 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 00:43:39.746050 systemd-logind[1618]: Removed session 9. Jan 15 00:43:39.726000 audit[5366]: CRED_DISP pid=5366 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.777180 kernel: audit: type=1106 audit(1768437819.725:750): pid=5366 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.777405 kernel: audit: type=1104 audit(1768437819.726:751): pid=5366 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:39.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:50442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:43.041340 kubelet[2808]: E0115 00:43:43.041175 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:43:44.042913 kubelet[2808]: E0115 00:43:44.042510 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:43:44.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:52188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:44.738696 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:52188.service - OpenSSH per-connection server daemon (10.0.0.1:52188). Jan 15 00:43:44.744677 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:43:44.744930 kernel: audit: type=1130 audit(1768437824.738:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:52188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:43:44.932126 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 52188 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:43:44.930000 audit[5413]: USER_ACCT pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:44.935249 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:43:44.950249 systemd-logind[1618]: New session 10 of user core. Jan 15 00:43:44.933000 audit[5413]: CRED_ACQ pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:44.988112 kernel: audit: type=1101 audit(1768437824.930:754): pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:44.988352 kernel: audit: type=1103 audit(1768437824.933:755): pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:44.988409 kernel: audit: type=1006 audit(1768437824.933:756): pid=5413 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 15 00:43:44.933000 audit[5413]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedfd2cb00 a2=3 a3=0 items=0 ppid=1 pid=5413 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:44.933000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:45.046811 kernel: audit: type=1300 audit(1768437824.933:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedfd2cb00 a2=3 a3=0 items=0 ppid=1 pid=5413 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:45.046961 kernel: audit: type=1327 audit(1768437824.933:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:45.048898 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 15 00:43:45.059000 audit[5413]: USER_START pid=5413 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.066000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.118234 kernel: audit: type=1105 audit(1768437825.059:757): pid=5413 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.118971 kernel: audit: type=1103 audit(1768437825.066:758): pid=5416 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.362209 sshd[5416]: Connection closed by 10.0.0.1 port 52188 Jan 15 00:43:45.364154 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Jan 15 00:43:45.366000 audit[5413]: USER_END pid=5413 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.391193 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:52190.service - OpenSSH per-connection server daemon (10.0.0.1:52190). Jan 15 00:43:45.405877 kernel: audit: type=1106 audit(1768437825.366:759): pid=5413 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.437246 kernel: audit: type=1104 audit(1768437825.367:760): pid=5413 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.367000 audit[5413]: CRED_DISP pid=5413 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:52190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:45.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:52188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:45.434999 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:52188.service: Deactivated successfully. 
Jan 15 00:43:45.440497 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 00:43:45.451222 systemd-logind[1618]: Session 10 logged out. Waiting for processes to exit. Jan 15 00:43:45.458211 systemd-logind[1618]: Removed session 10. Jan 15 00:43:45.549413 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:43:45.548000 audit[5430]: USER_ACCT pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.552000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.552000 audit[5430]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6d017eb0 a2=3 a3=0 items=0 ppid=1 pid=5430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:45.552000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:45.555922 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:43:45.571852 systemd-logind[1618]: New session 11 of user core. Jan 15 00:43:45.584964 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 00:43:45.599000 audit[5430]: USER_START pid=5430 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.609000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.903488 sshd[5436]: Connection closed by 10.0.0.1 port 52190 Jan 15 00:43:45.904216 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Jan 15 00:43:45.911000 audit[5430]: USER_END pid=5430 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.911000 audit[5430]: CRED_DISP pid=5430 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:45.922117 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:52190.service: Deactivated successfully. Jan 15 00:43:45.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:52190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:45.929476 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 15 00:43:45.932208 systemd-logind[1618]: Session 11 logged out. Waiting for processes to exit. Jan 15 00:43:45.946228 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:52194.service - OpenSSH per-connection server daemon (10.0.0.1:52194). Jan 15 00:43:45.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:52194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:45.951139 systemd-logind[1618]: Removed session 11. Jan 15 00:43:46.062250 kubelet[2808]: E0115 00:43:46.061892 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:43:46.103000 audit[5447]: USER_ACCT pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:46.105931 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 52194 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:43:46.107000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:46.107000 audit[5447]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbbb85b40 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:46.107000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:46.109560 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:43:46.122080 systemd-logind[1618]: New session 12 of user core. Jan 15 00:43:46.136208 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 15 00:43:46.144000 audit[5447]: USER_START pid=5447 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:46.149000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:46.366691 sshd[5450]: Connection closed by 10.0.0.1 port 52194 Jan 15 00:43:46.367070 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Jan 15 00:43:46.369000 audit[5447]: USER_END pid=5447 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:46.369000 audit[5447]: CRED_DISP pid=5447 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:46.376887 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:52194.service: Deactivated successfully. Jan 15 00:43:46.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:52194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:46.383559 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 00:43:46.387406 systemd-logind[1618]: Session 12 logged out. Waiting for processes to exit. Jan 15 00:43:46.391157 systemd-logind[1618]: Removed session 12. 
Jan 15 00:43:49.041537 kubelet[2808]: E0115 00:43:49.041351 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:43:50.050538 kubelet[2808]: E0115 00:43:50.050462 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:43:51.402522 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:52208.service - OpenSSH per-connection server daemon (10.0.0.1:52208). Jan 15 00:43:51.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:52208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:51.409441 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 15 00:43:51.409509 kernel: audit: type=1130 audit(1768437831.402:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:52208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:51.548000 audit[5463]: USER_ACCT pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.550135 sshd[5463]: Accepted publickey for core from 10.0.0.1 port 52208 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:43:51.554708 sshd-session[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:43:51.579140 systemd-logind[1618]: New session 13 of user core. 
Jan 15 00:43:51.583118 kernel: audit: type=1101 audit(1768437831.548:781): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.583168 kernel: audit: type=1103 audit(1768437831.551:782): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.551000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.645870 kernel: audit: type=1006 audit(1768437831.551:783): pid=5463 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 15 00:43:51.551000 audit[5463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff72539910 a2=3 a3=0 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:51.648432 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 00:43:51.551000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:51.712893 kernel: audit: type=1300 audit(1768437831.551:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff72539910 a2=3 a3=0 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:51.713020 kernel: audit: type=1327 audit(1768437831.551:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:51.713059 kernel: audit: type=1105 audit(1768437831.674:784): pid=5463 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.674000 audit[5463]: USER_START pid=5463 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.741482 kernel: audit: type=1103 audit(1768437831.680:785): pid=5466 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.680000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:51.983496 sshd[5466]: Connection closed by 10.0.0.1 port 52208 Jan 15 00:43:51.988230 
sshd-session[5463]: pam_unix(sshd:session): session closed for user core Jan 15 00:43:52.008000 audit[5463]: USER_END pid=5463 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:52.018030 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:52208.service: Deactivated successfully. Jan 15 00:43:52.018131 systemd-logind[1618]: Session 13 logged out. Waiting for processes to exit. Jan 15 00:43:52.023548 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 00:43:52.028129 systemd-logind[1618]: Removed session 13. Jan 15 00:43:52.008000 audit[5463]: CRED_DISP pid=5463 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:52.049867 kubelet[2808]: E0115 00:43:52.049234 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:43:52.072868 kernel: audit: type=1106 audit(1768437832.008:786): pid=5463 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:52.072986 kernel: audit: type=1104 audit(1768437832.008:787): pid=5463 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:52.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:52208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:43:53.048703 kubelet[2808]: E0115 00:43:53.048491 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:43:54.049888 kubelet[2808]: E0115 00:43:54.048214 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:43:55.041941 kubelet[2808]: E0115 00:43:55.040116 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:43:55.047860 kubelet[2808]: E0115 00:43:55.047101 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:43:57.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:33612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:43:57.024387 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:33612.service - OpenSSH per-connection server daemon (10.0.0.1:33612). Jan 15 00:43:57.031096 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:43:57.031205 kernel: audit: type=1130 audit(1768437837.023:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:33612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:43:57.057536 kubelet[2808]: E0115 00:43:57.057361 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:43:57.200536 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 33612 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:43:57.199000 audit[5480]: USER_ACCT pid=5480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.211512 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:43:57.239600 systemd-logind[1618]: New session 14 of user core. Jan 15 00:43:57.209000 audit[5480]: CRED_ACQ pid=5480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.274908 kernel: audit: type=1101 audit(1768437837.199:790): pid=5480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.275043 kernel: audit: type=1103 audit(1768437837.209:791): pid=5480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.276606 kernel: audit: type=1006 audit(1768437837.209:792): pid=5480 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 15 00:43:57.209000 audit[5480]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4c56b090 a2=3 a3=0 items=0 ppid=1 pid=5480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:57.340479 kernel: audit: type=1300 audit(1768437837.209:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4c56b090 a2=3 a3=0 items=0 ppid=1 pid=5480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:43:57.342977 kernel: audit: type=1327 audit(1768437837.209:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:57.209000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:43:57.341888 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 15 00:43:57.355000 audit[5480]: USER_START pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.361000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.434558 kernel: audit: type=1105 audit(1768437837.355:793): pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.434694 kernel: audit: type=1103 audit(1768437837.361:794): pid=5483 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.708593 sshd[5483]: Connection closed by 10.0.0.1 port 33612 Jan 15 00:43:57.710493 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Jan 15 00:43:57.713000 audit[5480]: USER_END pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.726944 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:33612.service: Deactivated successfully. Jan 15 00:43:57.733378 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 00:43:57.739925 systemd-logind[1618]: Session 14 logged out. Waiting for processes to exit. Jan 15 00:43:57.742959 systemd-logind[1618]: Removed session 14. Jan 15 00:43:57.747030 kernel: audit: type=1106 audit(1768437837.713:795): pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.713000 audit[5480]: CRED_DISP pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.783935 kernel: audit: type=1104 audit(1768437837.713:796): pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:43:57.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:33612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:00.045607 kubelet[2808]: E0115 00:44:00.045541 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:44:02.046630 kubelet[2808]: E0115 00:44:02.046543 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:44:02.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:37684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:02.737232 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:37684.service - OpenSSH per-connection server daemon (10.0.0.1:37684). Jan 15 00:44:02.746994 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:44:02.747078 kernel: audit: type=1130 audit(1768437842.736:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:37684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:02.849000 audit[5498]: USER_ACCT pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:02.850337 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 37684 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:02.866424 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:02.864000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:02.882451 systemd-logind[1618]: New session 15 of user core. 
Jan 15 00:44:02.907581 kernel: audit: type=1101 audit(1768437842.849:799): pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:02.908897 kernel: audit: type=1103 audit(1768437842.864:800): pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:02.909388 kernel: audit: type=1006 audit(1768437842.864:801): pid=5498 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 15 00:44:02.864000 audit[5498]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe35917230 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:02.925170 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 00:44:02.954414 kernel: audit: type=1300 audit(1768437842.864:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe35917230 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:02.954529 kernel: audit: type=1327 audit(1768437842.864:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:02.864000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:02.935000 audit[5498]: USER_START pid=5498 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:02.994414 kernel: audit: type=1105 audit(1768437842.935:802): pid=5498 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:02.994543 kernel: audit: type=1103 audit(1768437842.942:803): pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:02.942000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:03.137439 sshd[5501]: Connection closed by 10.0.0.1 port 37684 Jan 15 00:44:03.138098 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:03.145000 audit[5498]: USER_END pid=5498 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:03.152649 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:37684.service: Deactivated successfully. Jan 15 00:44:03.159148 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 00:44:03.163419 systemd-logind[1618]: Session 15 logged out. Waiting for processes to exit. Jan 15 00:44:03.168212 systemd-logind[1618]: Removed session 15. Jan 15 00:44:03.146000 audit[5498]: CRED_DISP pid=5498 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:03.212628 kernel: audit: type=1106 audit(1768437843.145:804): pid=5498 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:03.214060 kernel: audit: type=1104 audit(1768437843.146:805): pid=5498 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:03.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:37684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:04.052354 containerd[1637]: time="2026-01-15T00:44:04.051694941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:44:04.130422 containerd[1637]: time="2026-01-15T00:44:04.130118726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:04.135924 containerd[1637]: time="2026-01-15T00:44:04.135206077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:04.135924 containerd[1637]: time="2026-01-15T00:44:04.135362350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:44:04.140063 kubelet[2808]: E0115 00:44:04.136137 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:44:04.140063 kubelet[2808]: E0115 00:44:04.136216 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:44:04.142022 kubelet[2808]: E0115 00:44:04.140044 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-cmczc_calico-apiserver(e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:04.143397 kubelet[2808]: E0115 00:44:04.143234 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:44:06.042532 kubelet[2808]: E0115 00:44:06.042449 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:44:06.046033 containerd[1637]: time="2026-01-15T00:44:06.045417675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 00:44:06.147912 containerd[1637]: time="2026-01-15T00:44:06.147558745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:06.157226 containerd[1637]: time="2026-01-15T00:44:06.153971763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 00:44:06.157226 containerd[1637]: time="2026-01-15T00:44:06.154093260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:06.157500 kubelet[2808]: E0115 00:44:06.157075 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:44:06.157500 kubelet[2808]: E0115 00:44:06.157129 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 00:44:06.157500 kubelet[2808]: E0115 00:44:06.157392 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krx4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c6d667669-rqqnw_calico-system(4a17e4fb-67a0-4d0e-b72d-590a1df87758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:06.158611 kubelet[2808]: E0115 00:44:06.158577 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:44:07.043891 containerd[1637]: time="2026-01-15T00:44:07.043427648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 00:44:07.144462 containerd[1637]: time="2026-01-15T00:44:07.143923194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:07.147675 containerd[1637]: time="2026-01-15T00:44:07.147371026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 00:44:07.148385 containerd[1637]: time="2026-01-15T00:44:07.147659186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:07.149110 kubelet[2808]: E0115 00:44:07.148176 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:44:07.149110 kubelet[2808]: E0115 00:44:07.148232 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 00:44:07.149110 kubelet[2808]: E0115 00:44:07.148451 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:924b8d98b6e94388a97563446e348ded,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:07.155200 containerd[1637]: time="2026-01-15T00:44:07.155101446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 00:44:07.258035 containerd[1637]: time="2026-01-15T00:44:07.257657424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:07.261872 containerd[1637]: time="2026-01-15T00:44:07.261655888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 00:44:07.262436 containerd[1637]: time="2026-01-15T00:44:07.261923679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:07.264347 kubelet[2808]: E0115 00:44:07.264034 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:44:07.264538 kubelet[2808]: E0115 00:44:07.264183 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 00:44:07.266010 kubelet[2808]: E0115 00:44:07.264616 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl56x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-647ddb774d-q28b2_calico-system(2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:07.270050 kubelet[2808]: E0115 00:44:07.269968 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:44:08.043185 kubelet[2808]: E0115 00:44:08.042681 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:44:08.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:37700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:08.169204 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:37700.service - OpenSSH per-connection server daemon (10.0.0.1:37700). Jan 15 00:44:08.180934 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:44:08.180995 kernel: audit: type=1130 audit(1768437848.168:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:37700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:08.324000 audit[5520]: USER_ACCT pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.326129 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 37700 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:08.331204 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:08.351927 systemd-logind[1618]: New session 16 of user core. Jan 15 00:44:08.355152 kernel: audit: type=1101 audit(1768437848.324:808): pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.328000 audit[5520]: CRED_ACQ pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.413024 kernel: audit: type=1103 audit(1768437848.328:809): pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.413136 kernel: audit: type=1006 audit(1768437848.328:810): pid=5520 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 15 00:44:08.413200 kernel: audit: type=1300 audit(1768437848.328:810): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9cdb2fe0 a2=3 a3=0 items=0 ppid=1 pid=5520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:08.328000 audit[5520]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9cdb2fe0 a2=3 a3=0 items=0 ppid=1 pid=5520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:08.443156 kernel: audit: type=1327 audit(1768437848.328:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:08.328000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:08.442126 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 15 00:44:08.453000 audit[5520]: USER_START pid=5520 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.503913 kernel: audit: type=1105 audit(1768437848.453:811): pid=5520 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.458000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.529951 kernel: audit: type=1103 audit(1768437848.458:812): pid=5523 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.756373 sshd[5523]: Connection closed by 10.0.0.1 port 37700 Jan 15 00:44:08.758218 sshd-session[5520]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:08.760000 audit[5520]: USER_END pid=5520 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.760000 audit[5520]: CRED_DISP pid=5520 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.815993 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:37700.service: Deactivated successfully. Jan 15 00:44:08.816125 systemd-logind[1618]: Session 16 logged out. Waiting for processes to exit. Jan 15 00:44:08.820704 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 00:44:08.826929 systemd-logind[1618]: Removed session 16. Jan 15 00:44:08.841612 kernel: audit: type=1106 audit(1768437848.760:813): pid=5520 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.841845 kernel: audit: type=1104 audit(1768437848.760:814): pid=5520 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:08.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:37700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:10.047434 kubelet[2808]: E0115 00:44:10.047123 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:44:13.044868 containerd[1637]: time="2026-01-15T00:44:13.044593789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 00:44:13.120609 containerd[1637]: time="2026-01-15T00:44:13.120540372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:13.123500 containerd[1637]: time="2026-01-15T00:44:13.123203995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 00:44:13.123638 containerd[1637]: time="2026-01-15T00:44:13.123386096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:13.124649 kubelet[2808]: E0115 00:44:13.124455 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:44:13.124649 kubelet[2808]: E0115 00:44:13.124596 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 00:44:13.125416 kubelet[2808]: E0115 00:44:13.124884 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:13.129106 containerd[1637]: time="2026-01-15T00:44:13.128900444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 00:44:13.216377 containerd[1637]: time="2026-01-15T00:44:13.216026917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:13.221605 containerd[1637]: time="2026-01-15T00:44:13.221434272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 00:44:13.221695 containerd[1637]: time="2026-01-15T00:44:13.221622093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:13.222524 kubelet[2808]: E0115 00:44:13.221966 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:44:13.222524 kubelet[2808]: E0115 00:44:13.222035 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 00:44:13.222524 kubelet[2808]: E0115 00:44:13.222187 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q7lg4_calico-system(5de5824a-09ed-431d-8ba6-dbc85139b40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:13.224979 kubelet[2808]: E0115 00:44:13.224706 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:44:13.807952 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:44:13.808100 kernel: audit: type=1130 audit(1768437853.784:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:39312 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:13.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:39312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:13.785212 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:39312.service - OpenSSH per-connection server daemon (10.0.0.1:39312). Jan 15 00:44:13.943000 audit[5571]: USER_ACCT pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:13.950946 sshd[5571]: Accepted publickey for core from 10.0.0.1 port 39312 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:13.954652 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:13.965614 systemd-logind[1618]: New session 17 of user core. Jan 15 00:44:13.952000 audit[5571]: CRED_ACQ pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.000382 kernel: audit: type=1101 audit(1768437853.943:817): pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.000493 kernel: audit: type=1103 audit(1768437853.952:818): pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.000526 kernel: audit: type=1006 audit(1768437853.952:819): pid=5571 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 15 00:44:13.952000 audit[5571]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffca9f1980 a2=3 a3=0 items=0 ppid=1 pid=5571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:13.952000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:14.068156 kernel: audit: type=1300 audit(1768437853.952:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffca9f1980 a2=3 a3=0 items=0 ppid=1 pid=5571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:14.068336 kernel: audit: type=1327 audit(1768437853.952:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:14.070578 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 15 00:44:14.080000 audit[5571]: USER_START pid=5571 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.085000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.142537 kernel: audit: type=1105 audit(1768437854.080:820): pid=5571 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.142632 kernel: audit: type=1103 audit(1768437854.085:821): pid=5574 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.304008 sshd[5574]: Connection closed by 10.0.0.1 port 39312 Jan 15 00:44:14.307192 sshd-session[5571]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:14.311000 audit[5571]: USER_END pid=5571 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.318021 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:39312.service: Deactivated successfully. Jan 15 00:44:14.318373 systemd-logind[1618]: Session 17 logged out. Waiting for processes to exit. Jan 15 00:44:14.323498 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 00:44:14.331044 systemd-logind[1618]: Removed session 17. Jan 15 00:44:14.311000 audit[5571]: CRED_DISP pid=5571 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.374091 kernel: audit: type=1106 audit(1768437854.311:822): pid=5571 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.374192 kernel: audit: type=1104 audit(1768437854.311:823): pid=5571 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:14.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:39312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:17.042579 kubelet[2808]: E0115 00:44:17.042157 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:44:17.042579 kubelet[2808]: E0115 00:44:17.042451 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:44:19.046937 kubelet[2808]: E0115 00:44:19.043505 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:44:19.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:39322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:19.343349 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:39322.service - OpenSSH per-connection server daemon (10.0.0.1:39322). Jan 15 00:44:19.350995 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:44:19.351048 kernel: audit: type=1130 audit(1768437859.342:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:39322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:19.477457 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 39322 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:19.476000 audit[5594]: USER_ACCT pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.482919 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:19.508710 systemd-logind[1618]: New session 18 of user core. 
Jan 15 00:44:19.517005 kernel: audit: type=1101 audit(1768437859.476:826): pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.478000 audit[5594]: CRED_ACQ pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.550033 kernel: audit: type=1103 audit(1768437859.478:827): pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.551488 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 00:44:19.481000 audit[5594]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec31727a0 a2=3 a3=0 items=0 ppid=1 pid=5594 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:19.607571 kernel: audit: type=1006 audit(1768437859.481:828): pid=5594 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 15 00:44:19.608187 kernel: audit: type=1300 audit(1768437859.481:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec31727a0 a2=3 a3=0 items=0 ppid=1 pid=5594 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:19.608226 kernel: audit: type=1327 audit(1768437859.481:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:19.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:19.560000 audit[5594]: USER_START pid=5594 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.658397 kernel: audit: type=1105 audit(1768437859.560:829): pid=5594 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.565000 audit[5597]: CRED_ACQ pid=5597 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.687869 kernel: audit: type=1103 audit(1768437859.565:830): pid=5597 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.849600 sshd[5597]: Connection closed by 10.0.0.1 port 39322 Jan 15 00:44:19.851168 
sshd-session[5594]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:19.855000 audit[5594]: USER_END pid=5594 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.869507 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:39322.service: Deactivated successfully. Jan 15 00:44:19.893451 kernel: audit: type=1106 audit(1768437859.855:831): pid=5594 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.855000 audit[5594]: CRED_DISP pid=5594 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.894616 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 00:44:19.899980 systemd-logind[1618]: Session 18 logged out. Waiting for processes to exit. Jan 15 00:44:19.904398 systemd-logind[1618]: Removed session 18. Jan 15 00:44:19.918138 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:39324.service - OpenSSH per-connection server daemon (10.0.0.1:39324). Jan 15 00:44:19.925561 kernel: audit: type=1104 audit(1768437859.855:832): pid=5594 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:19.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:39322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:19.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:39324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:20.093000 audit[5611]: USER_ACCT pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:20.096006 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 39324 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:20.097000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:20.097000 audit[5611]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc743ab0f0 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:20.097000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:20.099402 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:20.114111 systemd-logind[1618]: New session 19 of user core. Jan 15 00:44:20.125355 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 00:44:20.136000 audit[5611]: USER_START pid=5611 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:20.141000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:20.883648 sshd[5615]: Connection closed by 10.0.0.1 port 39324 Jan 15 00:44:20.891103 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:20.914000 audit[5611]: USER_END pid=5611 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:20.915000 audit[5611]: CRED_DISP pid=5611 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:20.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:39334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:20.915173 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:39334.service - OpenSSH per-connection server daemon (10.0.0.1:39334). Jan 15 00:44:20.924082 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:39324.service: Deactivated successfully. 
Jan 15 00:44:20.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:39324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:20.930968 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 00:44:20.944107 systemd-logind[1618]: Session 19 logged out. Waiting for processes to exit. Jan 15 00:44:20.950171 systemd-logind[1618]: Removed session 19. Jan 15 00:44:21.055000 audit[5624]: USER_ACCT pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:21.056548 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 39334 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:21.057000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:21.057000 audit[5624]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb0477860 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:21.057000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:21.059370 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:21.076016 systemd-logind[1618]: New session 20 of user core. Jan 15 00:44:21.089595 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 15 00:44:21.104000 audit[5624]: USER_START pid=5624 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:21.110000 audit[5630]: CRED_ACQ pid=5630 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:22.051914 containerd[1637]: time="2026-01-15T00:44:22.051389151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 00:44:22.148195 containerd[1637]: time="2026-01-15T00:44:22.147131486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:22.153016 containerd[1637]: time="2026-01-15T00:44:22.152600934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 00:44:22.153016 containerd[1637]: time="2026-01-15T00:44:22.152988166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:22.156188 kubelet[2808]: E0115 00:44:22.156069 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:44:22.159074 kubelet[2808]: E0115 00:44:22.156203 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 00:44:22.159074 kubelet[2808]: E0115 00:44:22.156442 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znh8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c9bd68d8f-tstqw_calico-apiserver(2f1337fc-8e0d-4906-b1f0-90d0896b3f07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:22.159074 kubelet[2808]: E0115 00:44:22.157557 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:44:22.556681 sshd[5630]: Connection closed by 10.0.0.1 port 39334 Jan 15 00:44:22.560384 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:22.563000 audit[5624]: USER_END pid=5624 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:22.567000 audit[5624]: CRED_DISP pid=5624 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:22.578520 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:39334.service: Deactivated successfully. Jan 15 00:44:22.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:39334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:22.585017 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 00:44:22.592131 systemd-logind[1618]: Session 20 logged out. Waiting for processes to exit. Jan 15 00:44:22.599435 systemd-logind[1618]: Removed session 20. Jan 15 00:44:22.608052 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:37096.service - OpenSSH per-connection server daemon (10.0.0.1:37096). Jan 15 00:44:22.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:37096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:22.745000 audit[5652]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:44:22.745000 audit[5652]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc1edf7700 a2=0 a3=7ffc1edf76ec items=0 ppid=2968 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:22.745000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:44:22.748000 audit[5649]: USER_ACCT pid=5649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:22.751157 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 37096 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:22.751000 audit[5649]: CRED_ACQ pid=5649 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:22.751000 audit[5649]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb1844b60 a2=3 a3=0 items=0 ppid=1 pid=5649 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:22.751000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:22.753944 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:22.754000 audit[5652]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:44:22.754000 audit[5652]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc1edf7700 a2=0 a3=0 items=0 ppid=2968 pid=5652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:22.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:44:22.767073 systemd-logind[1618]: New session 21 of user core. Jan 15 00:44:22.773943 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 15 00:44:22.784000 audit[5649]: USER_START pid=5649 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:22.799000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:22.841000 audit[5656]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:44:22.841000 audit[5656]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3ff5f950 a2=0 a3=7fff3ff5f93c items=0 ppid=2968 pid=5656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:22.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:44:22.850000 audit[5656]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:44:22.850000 audit[5656]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff3ff5f950 a2=0 a3=0 items=0 ppid=2968 pid=5656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:22.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:44:23.057144 kubelet[2808]: E0115 00:44:23.056681 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:44:23.455091 sshd[5654]: Connection closed by 10.0.0.1 port 37096 Jan 15 00:44:23.456497 sshd-session[5649]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:23.463000 audit[5649]: USER_END pid=5649 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:23.463000 audit[5649]: CRED_DISP pid=5649 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:23.478476 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:37096.service: Deactivated successfully. Jan 15 00:44:23.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:37096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:23.484874 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 00:44:23.492537 systemd-logind[1618]: Session 21 logged out. Waiting for processes to exit. Jan 15 00:44:23.517312 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:37098.service - OpenSSH per-connection server daemon (10.0.0.1:37098). Jan 15 00:44:23.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:37098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:23.521195 systemd-logind[1618]: Removed session 21. Jan 15 00:44:23.661000 audit[5667]: USER_ACCT pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:23.663609 sshd[5667]: Accepted publickey for core from 10.0.0.1 port 37098 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:23.664000 audit[5667]: CRED_ACQ pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:23.664000 audit[5667]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd42283750 a2=3 a3=0 items=0 ppid=1 pid=5667 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:23.664000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:23.667909 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:23.685902 systemd-logind[1618]: New session 22 of user core. Jan 15 00:44:23.702212 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 15 00:44:23.711000 audit[5667]: USER_START pid=5667 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:23.715000 audit[5670]: CRED_ACQ pid=5670 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:23.991173 sshd[5670]: Connection closed by 10.0.0.1 port 37098 Jan 15 00:44:23.991025 sshd-session[5667]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:23.993000 audit[5667]: USER_END pid=5667 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:23.993000 audit[5667]: CRED_DISP pid=5667 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:24.005197 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:37098.service: Deactivated successfully. Jan 15 00:44:24.006427 systemd-logind[1618]: Session 22 logged out. Waiting for processes to exit. Jan 15 00:44:24.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:37098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:24.016052 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 00:44:24.026014 systemd-logind[1618]: Removed session 22. 
Jan 15 00:44:24.046045 containerd[1637]: time="2026-01-15T00:44:24.044404718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 00:44:24.124453 containerd[1637]: time="2026-01-15T00:44:24.124391037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 00:44:24.128571 containerd[1637]: time="2026-01-15T00:44:24.128217238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 00:44:24.128571 containerd[1637]: time="2026-01-15T00:44:24.128412071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 00:44:24.131017 kubelet[2808]: E0115 00:44:24.129179 2808 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:44:24.131017 kubelet[2808]: E0115 00:44:24.129305 2808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 00:44:24.131017 kubelet[2808]: E0115 00:44:24.129631 2808 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rzpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9ztx8_calico-system(1429bbd4-fe5b-4951-85dc-5a892fa35b68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 00:44:24.131017 kubelet[2808]: E0115 00:44:24.130948 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:44:27.044037 kubelet[2808]: E0115 00:44:27.039947 2808 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 00:44:27.046414 kubelet[2808]: E0115 00:44:27.045132 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:44:28.051022 kubelet[2808]: E0115 00:44:28.046046 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:44:29.013696 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:37114.service - OpenSSH per-connection server daemon (10.0.0.1:37114). 
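Every image pull in this section fails with "not found" for the ghcr.io/flatcar/calico/* tags (goldmane, whisker, csi, apiserver, kube-controllers, all at v3.30.4). A minimal sketch for cross-checking one tag against the registry over the standard OCI distribution API; it assumes the repository is public and that ghcr.io issues anonymous pull tokens from its /token endpoint (both are assumptions, not taken from this log):

    import json, urllib.request, urllib.error

    image = "flatcar/calico/goldmane"   # repository path under ghcr.io, from the log above
    tag = "v3.30.4"

    # Assumption: public repository, anonymous pull token from ghcr.io's /token endpoint.
    token = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?service=ghcr.io&scope=repository:{image}:pull"))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{image}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        })
    try:
        with urllib.request.urlopen(req) as resp:
            print("tag resolves:", resp.status)
    except urllib.error.HTTPError as e:
        # A 404 here matches the containerd/kubelet "not found" errors above;
        # 401/403 would instead point at an authentication or visibility problem.
        print("registry answered", e.code)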
Jan 15 00:44:29.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:37114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:29.043055 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 15 00:44:29.043170 kernel: audit: type=1130 audit(1768437869.012:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:37114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:29.123000 audit[5698]: USER_ACCT pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.128528 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 37114 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:29.131670 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:29.148990 systemd-logind[1618]: New session 23 of user core. Jan 15 00:44:29.126000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.182314 kernel: audit: type=1101 audit(1768437869.123:875): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.182384 kernel: audit: type=1103 audit(1768437869.126:876): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.197078 kernel: audit: type=1006 audit(1768437869.126:877): pid=5698 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 15 00:44:29.126000 audit[5698]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb4f669c0 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:29.228520 kernel: audit: type=1300 audit(1768437869.126:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb4f669c0 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:29.126000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:29.231998 systemd[1]: Started session-23.scope - Session 23 of User core. 
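The kubelet dns.go warning a few entries back ("Nameserver limits exceeded", applied line 1.1.1.1 1.0.0.1 8.8.8.8) reflects the three-nameserver cap: only the first three resolv.conf entries are applied and the rest are dropped. A minimal check, assuming the node's /etc/resolv.conf is the file being read:

    # Report whether resolv.conf exceeds the three-nameserver cap that the
    # kubelet warning above refers to (glibc's resolver also stops at three).
    MAX_NAMESERVERS = 3

    def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
        with open(path) as f:
            servers = [parts[1] for parts in (line.split() for line in f)
                       if len(parts) > 1 and parts[0] == "nameserver"]
        if len(servers) > MAX_NAMESERVERS:
            print("limit exceeded: applied", servers[:MAX_NAMESERVERS],
                  "omitted", servers[MAX_NAMESERVERS:])
        else:
            print(len(servers), "nameserver(s), within the limit")

    check_resolv_conf()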
Jan 15 00:44:29.241912 kernel: audit: type=1327 audit(1768437869.126:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:29.242000 audit[5698]: USER_START pid=5698 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.279919 kernel: audit: type=1105 audit(1768437869.242:878): pid=5698 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.250000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.308972 kernel: audit: type=1103 audit(1768437869.250:879): pid=5701 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.440993 sshd[5701]: Connection closed by 10.0.0.1 port 37114 Jan 15 00:44:29.442337 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:29.443000 audit[5698]: USER_END pid=5698 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.453974 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:37114.service: Deactivated successfully. Jan 15 00:44:29.459089 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 00:44:29.461545 systemd-logind[1618]: Session 23 logged out. Waiting for processes to exit. Jan 15 00:44:29.465026 systemd-logind[1618]: Removed session 23. Jan 15 00:44:29.443000 audit[5698]: CRED_DISP pid=5698 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.502663 kernel: audit: type=1106 audit(1768437869.443:880): pid=5698 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.502915 kernel: audit: type=1104 audit(1768437869.443:881): pid=5698 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:29.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:37114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:30.945000 audit[5714]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:44:30.945000 audit[5714]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1b06aac0 a2=0 a3=7ffc1b06aaac items=0 ppid=2968 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:30.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:44:30.956000 audit[5714]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 00:44:30.956000 audit[5714]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc1b06aac0 a2=0 a3=7ffc1b06aaac items=0 ppid=2968 pid=5714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:30.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 00:44:34.043506 kubelet[2808]: E0115 00:44:34.043180 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c6d667669-rqqnw" podUID="4a17e4fb-67a0-4d0e-b72d-590a1df87758" Jan 15 00:44:34.049469 kubelet[2808]: E0115 00:44:34.047203 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-647ddb774d-q28b2" podUID="2bb8dcf6-c0ad-46fb-ba76-7aaeab87e7a9" Jan 15 00:44:34.467054 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:59218.service - OpenSSH per-connection server daemon (10.0.0.1:59218). Jan 15 00:44:34.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:59218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:34.491995 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 15 00:44:34.492136 kernel: audit: type=1130 audit(1768437874.465:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:59218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:34.606000 audit[5718]: USER_ACCT pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.611156 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 59218 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:34.642024 kernel: audit: type=1101 audit(1768437874.606:886): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.642000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.645404 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:34.673916 kernel: audit: type=1103 audit(1768437874.642:887): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.681514 systemd-logind[1618]: New session 24 of user core. Jan 15 00:44:34.702924 kernel: audit: type=1006 audit(1768437874.642:888): pid=5718 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 15 00:44:34.642000 audit[5718]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8392fc00 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:34.708582 systemd[1]: Started session-24.scope - Session 24 of User core. 
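The SYSCALL records throughout this section identify the call only by number: arch=c000003e is the x86_64 ABI, and the two numbers that recur are 1 and 46. A minimal lookup covering just those two (on a live system the audit userspace ships ausyscall for the full table):

    # Name the audit SYSCALL fields seen in this log (x86_64 only).
    # 0xC000003E == AUDIT_ARCH_X86_64; syscall 1 == write, 46 == sendmsg.
    X86_64_SYSCALLS = {1: "write", 46: "sendmsg"}  # only the numbers in this section

    def describe(arch_hex: str, nr: int) -> str:
        arch = "x86_64" if int(arch_hex, 16) == 0xC000003E else arch_hex
        return f"arch={arch} syscall={X86_64_SYSCALLS.get(nr, nr)}"

    print(describe("c000003e", 46))  # iptables-restore pushing rulesets over netlink
    print(describe("c000003e", 1))   # the write() calls recorded for sshd-session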
Jan 15 00:44:34.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:34.749065 kernel: audit: type=1300 audit(1768437874.642:888): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8392fc00 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:34.749124 kernel: audit: type=1327 audit(1768437874.642:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:34.725000 audit[5718]: USER_START pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.781819 kernel: audit: type=1105 audit(1768437874.725:889): pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.737000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.822079 kernel: audit: type=1103 audit(1768437874.737:890): pid=5721 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.977584 sshd[5721]: Connection closed by 10.0.0.1 port 59218 Jan 15 00:44:34.978550 sshd-session[5718]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:34.984000 audit[5718]: USER_END pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:35.037945 kernel: audit: type=1106 audit(1768437874.984:891): pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:34.997597 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:59218.service: Deactivated successfully. Jan 15 00:44:35.008583 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 00:44:35.013370 systemd-logind[1618]: Session 24 logged out. Waiting for processes to exit. Jan 15 00:44:35.019909 systemd-logind[1618]: Removed session 24. 
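The kauditd lines print record types numerically (type=1130, 1101, 1103, 1006, 1300, 1327, 1105, 1106, 1104); the named records elsewhere in the log show what each corresponds to, and the two forms can be matched by their audit(timestamp:serial) stamps. A small lookup table for the types appearing in this section:

    # Numeric audit record types seen in the kauditd duplicates above,
    # mapped to the record names used by the userspace audit entries.
    AUDIT_TYPES = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1327: "PROCTITLE",
    }

    for t in sorted(AUDIT_TYPES):
        print(t, AUDIT_TYPES[t])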
Jan 15 00:44:34.984000 audit[5718]: CRED_DISP pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:35.064129 kubelet[2808]: E0115 00:44:35.044707 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:44:35.064129 kubelet[2808]: E0115 00:44:35.048512 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:44:34.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:59218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:35.081077 kernel: audit: type=1104 audit(1768437874.984:892): pid=5718 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.004323 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:59232.service - OpenSSH per-connection server daemon (10.0.0.1:59232). Jan 15 00:44:40.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:59232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:40.012038 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:44:40.012122 kernel: audit: type=1130 audit(1768437880.003:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:59232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 00:44:40.047481 kubelet[2808]: E0115 00:44:40.047158 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7lg4" podUID="5de5824a-09ed-431d-8ba6-dbc85139b40f" Jan 15 00:44:40.201000 audit[5734]: USER_ACCT pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.204991 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 59232 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:40.242115 kernel: audit: type=1101 audit(1768437880.201:895): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.242000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.245951 sshd-session[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:40.262133 systemd-logind[1618]: New session 25 of user core. Jan 15 00:44:40.300653 kernel: audit: type=1103 audit(1768437880.242:896): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.300907 kernel: audit: type=1006 audit(1768437880.243:897): pid=5734 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 15 00:44:40.243000 audit[5734]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff38fcf190 a2=3 a3=0 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:40.301941 kernel: audit: type=1300 audit(1768437880.243:897): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff38fcf190 a2=3 a3=0 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:40.303449 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 15 00:44:40.243000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:40.353479 kernel: audit: type=1327 audit(1768437880.243:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:40.314000 audit[5734]: USER_START pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.399049 kernel: audit: type=1105 audit(1768437880.314:898): pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.318000 audit[5756]: CRED_ACQ pid=5756 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.430084 kernel: audit: type=1103 audit(1768437880.318:899): pid=5756 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.588548 sshd[5756]: Connection closed by 10.0.0.1 port 59232 Jan 15 00:44:40.590136 sshd-session[5734]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:40.608000 audit[5734]: USER_END pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.615099 systemd-logind[1618]: Session 25 logged out. Waiting for processes to exit. Jan 15 00:44:40.616584 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:59232.service: Deactivated successfully. Jan 15 00:44:40.620995 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 00:44:40.624461 systemd-logind[1618]: Removed session 25. 
Jan 15 00:44:40.608000 audit[5734]: CRED_DISP pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.663921 kernel: audit: type=1106 audit(1768437880.608:900): pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.663998 kernel: audit: type=1104 audit(1768437880.608:901): pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:40.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:59232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:42.042787 kubelet[2808]: E0115 00:44:42.042402 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-cmczc" podUID="e04d5d65-cdba-4eaf-b279-5b0e7c7a9f88" Jan 15 00:44:45.611448 systemd[1]: Started sshd@25-10.0.0.85:22-10.0.0.1:45060.service - OpenSSH per-connection server daemon (10.0.0.1:45060). Jan 15 00:44:45.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:45060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:45.618959 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 00:44:45.619006 kernel: audit: type=1130 audit(1768437885.609:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:45060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:45.738381 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 45060 ssh2: RSA SHA256:Dl+b0QTZTpY7oDWBQQl+4rfxVj/xV2OrnbVOImxw67E Jan 15 00:44:45.736000 audit[5778]: USER_ACCT pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:45.741319 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 00:44:45.756900 systemd-logind[1618]: New session 26 of user core. 
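Every "Accepted publickey" entry logs the same key as SHA256:Dl+b0QTZ…; OpenSSH derives that value as the unpadded base64 of the SHA-256 digest of the raw key blob. A minimal sketch that reproduces it from an authorized_keys-style line (the path below is illustrative, not taken from this log):

    import base64, hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        # authorized_keys format: "<type> <base64 key blob> [comment]"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        # OpenSSH prints the digest base64-encoded with the '=' padding stripped.
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Illustrative path: the key accepted for user "core" would normally sit in
    # that user's authorized_keys file.
    with open("/home/core/.ssh/authorized_keys") as f:
        for line in f:
            if line.strip() and not line.startswith("#"):
                print(ssh_sha256_fingerprint(line))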
Jan 15 00:44:45.738000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:45.794160 kernel: audit: type=1101 audit(1768437885.736:904): pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:45.794558 kernel: audit: type=1103 audit(1768437885.738:905): pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:45.794595 kernel: audit: type=1006 audit(1768437885.738:906): pid=5778 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 15 00:44:45.797562 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 00:44:45.738000 audit[5778]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1599ccf0 a2=3 a3=0 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:45.840560 kernel: audit: type=1300 audit(1768437885.738:906): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1599ccf0 a2=3 a3=0 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 00:44:45.840672 kernel: audit: type=1327 audit(1768437885.738:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:45.738000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 00:44:45.810000 audit[5778]: USER_START pid=5778 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:45.891131 kernel: audit: type=1105 audit(1768437885.810:907): pid=5778 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:45.894007 kernel: audit: type=1103 audit(1768437885.815:908): pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:45.815000 audit[5781]: CRED_ACQ pid=5781 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:46.060844 kubelet[2808]: E0115 00:44:46.060425 2808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9bd68d8f-tstqw" podUID="2f1337fc-8e0d-4906-b1f0-90d0896b3f07" Jan 15 00:44:46.061551 kubelet[2808]: E0115 00:44:46.060910 2808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9ztx8" podUID="1429bbd4-fe5b-4951-85dc-5a892fa35b68" Jan 15 00:44:46.084874 sshd[5781]: Connection closed by 10.0.0.1 port 45060 Jan 15 00:44:46.086301 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Jan 15 00:44:46.091000 audit[5778]: USER_END pid=5778 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:46.100544 systemd[1]: sshd@25-10.0.0.85:22-10.0.0.1:45060.service: Deactivated successfully. Jan 15 00:44:46.108370 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 00:44:46.115101 systemd-logind[1618]: Session 26 logged out. Waiting for processes to exit. Jan 15 00:44:46.121050 systemd-logind[1618]: Removed session 26. Jan 15 00:44:46.132090 kernel: audit: type=1106 audit(1768437886.091:909): pid=5778 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:46.092000 audit[5778]: CRED_DISP pid=5778 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 00:44:46.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:45060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 00:44:46.177999 kernel: audit: type=1104 audit(1768437886.092:910): pid=5778 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'