Jan 16 21:19:35.559129 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 18:44:02 -00 2026
Jan 16 21:19:35.559168 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:19:35.559182 kernel: BIOS-provided physical RAM map:
Jan 16 21:19:35.559197 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 16 21:19:35.559207 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 16 21:19:35.559217 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 16 21:19:35.559228 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 16 21:19:35.559238 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 16 21:19:35.559248 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 16 21:19:35.559258 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 16 21:19:35.559268 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 16 21:19:35.559281 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 16 21:19:35.559291 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 16 21:19:35.559301 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 16 21:19:35.559313 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 16 21:19:35.559324 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 16 21:19:35.559337 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 16 21:19:35.559348 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 16 21:19:35.559358 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 16 21:19:35.559369 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 16 21:19:35.559380 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 16 21:19:35.559391 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 16 21:19:35.559402 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 16 21:19:35.559412 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 21:19:35.559423 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 16 21:19:35.559433 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 16 21:19:35.559446 kernel: NX (Execute Disable) protection: active
Jan 16 21:19:35.559457 kernel: APIC: Static calls initialized
Jan 16 21:19:35.559467 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 16 21:19:35.559478 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 16 21:19:35.559488 kernel: extended physical RAM map:
Jan 16 21:19:35.559499 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 16 21:19:35.559571 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 16 21:19:35.559583 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 16 21:19:35.559594 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 16 21:19:35.559604 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 16 21:19:35.559615 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 16 21:19:35.559630 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 16 21:19:35.559640 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 16 21:19:35.559651 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 16 21:19:35.559667 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 16 21:19:35.559681 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 16 21:19:35.559692 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 16 21:19:35.559703 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 16 21:19:35.559714 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 16 21:19:35.566167 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 16 21:19:35.566195 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 16 21:19:35.566209 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 16 21:19:35.566219 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 16 21:19:35.566228 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 16 21:19:35.566244 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 16 21:19:35.566253 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 16 21:19:35.566263 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 16 21:19:35.566273 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 16 21:19:35.566282 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 16 21:19:35.566295 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 21:19:35.566306 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 16 21:19:35.566318 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 16 21:19:35.566329 kernel: efi: EFI v2.7 by EDK II
Jan 16 21:19:35.566339 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 16 21:19:35.566349 kernel: random: crng init done
Jan 16 21:19:35.566363 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 16 21:19:35.566373 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 16 21:19:35.566383 kernel: secureboot: Secure boot disabled
Jan 16 21:19:35.566392 kernel: SMBIOS 2.8 present.
Jan 16 21:19:35.566402 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 16 21:19:35.566415 kernel: DMI: Memory slots populated: 1/1
Jan 16 21:19:35.566426 kernel: Hypervisor detected: KVM
Jan 16 21:19:35.566439 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 16 21:19:35.566449 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 21:19:35.566459 kernel: kvm-clock: using sched offset of 12691284503 cycles
Jan 16 21:19:35.566469 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 21:19:35.566484 kernel: tsc: Detected 2445.426 MHz processor
Jan 16 21:19:35.566494 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 21:19:35.566573 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 21:19:35.566586 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 16 21:19:35.566596 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 16 21:19:35.566606 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 21:19:35.566617 kernel: Using GB pages for direct mapping
Jan 16 21:19:35.566634 kernel: ACPI: Early table checksum verification disabled
Jan 16 21:19:35.566647 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 16 21:19:35.566658 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 16 21:19:35.566668 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:19:35.566678 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:19:35.566688 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 16 21:19:35.566699 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:19:35.566713 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:19:35.566723 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:19:35.566736 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 21:19:35.566749 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 16 21:19:35.566759 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 16 21:19:35.566769 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 16 21:19:35.566779 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 16 21:19:35.566793 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 16 21:19:35.566803 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 16 21:19:35.566813 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 16 21:19:35.566826 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 16 21:19:35.566839 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 16 21:19:35.566849 kernel: No NUMA configuration found
Jan 16 21:19:35.566860 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 16 21:19:35.566870 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 16 21:19:35.566884 kernel: Zone ranges:
Jan 16 21:19:35.566895 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 21:19:35.566905 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 16 21:19:35.566917 kernel: Normal empty
Jan 16 21:19:35.566929 kernel: Device empty
Jan 16 21:19:35.566941 kernel: Movable zone start for each node
Jan 16 21:19:35.566954 kernel: Early memory node ranges
Jan 16 21:19:35.566968 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 16 21:19:35.566978 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 16 21:19:35.566989 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 16 21:19:35.566999 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 16 21:19:35.567009 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 16 21:19:35.567018 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 16 21:19:35.567029 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 16 21:19:35.567042 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 16 21:19:35.567148 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 16 21:19:35.567164 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 21:19:35.567189 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 16 21:19:35.567205 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 16 21:19:35.567215 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 21:19:35.567226 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 16 21:19:35.567236 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 16 21:19:35.570630 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 16 21:19:35.570648 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 16 21:19:35.570670 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 16 21:19:35.570681 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 21:19:35.570691 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 21:19:35.570702 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 21:19:35.570716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 21:19:35.570727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 21:19:35.570738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 21:19:35.570749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 21:19:35.570762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 21:19:35.570775 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 21:19:35.570791 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 21:19:35.570808 kernel: TSC deadline timer available
Jan 16 21:19:35.570822 kernel: CPU topo: Max. logical packages: 1
Jan 16 21:19:35.570834 kernel: CPU topo: Max. logical dies: 1
Jan 16 21:19:35.570848 kernel: CPU topo: Max. dies per package: 1
Jan 16 21:19:35.570863 kernel: CPU topo: Max. threads per core: 1
Jan 16 21:19:35.570875 kernel: CPU topo: Num. cores per package: 4
Jan 16 21:19:35.570887 kernel: CPU topo: Num. threads per package: 4
Jan 16 21:19:35.570900 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 16 21:19:35.570918 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 21:19:35.570932 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 16 21:19:35.570946 kernel: kvm-guest: setup PV sched yield
Jan 16 21:19:35.570962 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 16 21:19:35.570976 kernel: Booting paravirtualized kernel on KVM
Jan 16 21:19:35.570989 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 21:19:35.571005 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 16 21:19:35.571023 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 16 21:19:35.571037 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 16 21:19:35.571052 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 16 21:19:35.571172 kernel: kvm-guest: PV spinlocks enabled
Jan 16 21:19:35.571188 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 16 21:19:35.571206 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099
Jan 16 21:19:35.571221 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 21:19:35.571240 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 21:19:35.571253 kernel: Fallback order for Node 0: 0
Jan 16 21:19:35.571269 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 16 21:19:35.571283 kernel: Policy zone: DMA32
Jan 16 21:19:35.571298 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 21:19:35.571312 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 16 21:19:35.571327 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 16 21:19:35.571347 kernel: ftrace: allocated 157 pages with 5 groups
Jan 16 21:19:35.571360 kernel: Dynamic Preempt: voluntary
Jan 16 21:19:35.571373 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 21:19:35.571391 kernel: rcu: RCU event tracing is enabled.
Jan 16 21:19:35.571405 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 16 21:19:35.571416 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 21:19:35.571426 kernel: Rude variant of Tasks RCU enabled.
Jan 16 21:19:35.571445 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 21:19:35.571461 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 21:19:35.571474 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 16 21:19:35.571486 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 16 21:19:35.571498 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 16 21:19:35.571567 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 16 21:19:35.571580 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 16 21:19:35.571597 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 21:19:35.571610 kernel: Console: colour dummy device 80x25
Jan 16 21:19:35.571622 kernel: printk: legacy console [ttyS0] enabled
Jan 16 21:19:35.571634 kernel: ACPI: Core revision 20240827
Jan 16 21:19:35.571647 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 21:19:35.571659 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 21:19:35.571671 kernel: x2apic enabled
Jan 16 21:19:35.571683 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 21:19:35.571698 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 16 21:19:35.571711 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 16 21:19:35.571723 kernel: kvm-guest: setup PV IPIs
Jan 16 21:19:35.571735 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 21:19:35.571748 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 16 21:19:35.571760 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 16 21:19:35.571773 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 16 21:19:35.571788 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 16 21:19:35.571800 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 16 21:19:35.571813 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 21:19:35.571825 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 21:19:35.571837 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 16 21:19:35.571850 kernel: Speculative Store Bypass: Vulnerable
Jan 16 21:19:35.571862 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 16 21:19:35.571879 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 16 21:19:35.571891 kernel: active return thunk: srso_alias_return_thunk
Jan 16 21:19:35.571903 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 16 21:19:35.571915 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 16 21:19:35.571928 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 21:19:35.571940 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 21:19:35.571955 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 21:19:35.571967 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 21:19:35.571979 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 21:19:35.571992 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 16 21:19:35.572004 kernel: Freeing SMP alternatives memory: 32K
Jan 16 21:19:35.572016 kernel: pid_max: default: 32768 minimum: 301
Jan 16 21:19:35.572028 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 16 21:19:35.572043 kernel: landlock: Up and running.
Jan 16 21:19:35.572182 kernel: SELinux: Initializing.
Jan 16 21:19:35.572199 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 21:19:35.572213 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 21:19:35.572226 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 16 21:19:35.572239 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 16 21:19:35.572253 kernel: signal: max sigframe size: 1776
Jan 16 21:19:35.572270 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 21:19:35.572284 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 21:19:35.572298 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 16 21:19:35.572309 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 21:19:35.572320 kernel: smp: Bringing up secondary CPUs ...
Jan 16 21:19:35.572333 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 21:19:35.572346 kernel: .... node #0, CPUs: #1 #2 #3
Jan 16 21:19:35.572359 kernel: smp: Brought up 1 node, 4 CPUs
Jan 16 21:19:35.572375 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 16 21:19:35.572388 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120816K reserved, 0K cma-reserved)
Jan 16 21:19:35.572400 kernel: devtmpfs: initialized
Jan 16 21:19:35.572413 kernel: x86/mm: Memory block size: 128MB
Jan 16 21:19:35.572426 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 16 21:19:35.572438 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 16 21:19:35.572449 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 16 21:19:35.572466 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 16 21:19:35.572480 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 16 21:19:35.572492 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 16 21:19:35.572569 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 21:19:35.572584 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 16 21:19:35.572595 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 21:19:35.572610 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 21:19:35.572621 kernel: audit: initializing netlink subsys (disabled)
Jan 16 21:19:35.572635 kernel: audit: type=2000 audit(1768598359.649:1): state=initialized audit_enabled=0 res=1
Jan 16 21:19:35.572647 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 21:19:35.572660 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 21:19:35.572675 kernel: cpuidle: using governor menu
Jan 16 21:19:35.572686 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 21:19:35.572696 kernel: dca service started, version 1.12.1
Jan 16 21:19:35.572711 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 16 21:19:35.572722 kernel: PCI: Using configuration type 1 for base access
Jan 16 21:19:35.572732 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 21:19:35.572743 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 21:19:35.572754 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 21:19:35.572768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 21:19:35.572780 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 21:19:35.572798 kernel: ACPI: Added _OSI(Module Device)
Jan 16 21:19:35.572809 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 21:19:35.572819 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 21:19:35.572830 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 21:19:35.572840 kernel: ACPI: Interpreter enabled
Jan 16 21:19:35.572851 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 16 21:19:35.572861 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 21:19:35.572876 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 21:19:35.572889 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 21:19:35.572902 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 16 21:19:35.572915 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 21:19:35.573352 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 21:19:35.578277 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 16 21:19:35.578611 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 16 21:19:35.578632 kernel: PCI host bridge to bus 0000:00
Jan 16 21:19:35.578868 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 21:19:35.579215 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 21:19:35.579427 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 21:19:35.582856 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 16 21:19:35.583196 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 16 21:19:35.583416 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 16 21:19:35.583714 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 21:19:35.583980 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 16 21:19:35.584345 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 16 21:19:35.584701 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 16 21:19:35.584957 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 16 21:19:35.585385 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 16 21:19:35.588755 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 21:19:35.589002 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 10742 usecs
Jan 16 21:19:35.589398 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 16 21:19:35.589726 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 16 21:19:35.589978 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 16 21:19:35.590328 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 16 21:19:35.590647 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 16 21:19:35.590936 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 16 21:19:35.591372 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 16 21:19:35.603623 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 16 21:19:35.603876 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 16 21:19:35.604720 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 16 21:19:35.604988 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 16 21:19:35.605757 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 16 21:19:35.606014 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 16 21:19:35.606640 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 16 21:19:35.606906 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 16 21:19:35.607272 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs
Jan 16 21:19:35.607640 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 16 21:19:35.607872 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 16 21:19:35.608248 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 16 21:19:35.608702 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 16 21:19:35.608964 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 16 21:19:35.608983 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 21:19:35.608996 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 21:19:35.609017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 21:19:35.609028 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 21:19:35.609042 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 16 21:19:35.609151 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 16 21:19:35.609170 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 16 21:19:35.609182 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 16 21:19:35.609193 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 16 21:19:35.609208 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 16 21:19:35.609225 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 16 21:19:35.609238 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 16 21:19:35.609251 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 16 21:19:35.609262 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 16 21:19:35.609276 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 16 21:19:35.609289 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 16 21:19:35.609300 kernel: iommu: Default domain type: Translated
Jan 16 21:19:35.609318 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 21:19:35.609329 kernel: efivars: Registered efivars operations
Jan 16 21:19:35.609343 kernel: PCI: Using ACPI for IRQ routing
Jan 16 21:19:35.609356 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 21:19:35.609368 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 16 21:19:35.609382 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 16 21:19:35.609394 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 16 21:19:35.609411 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 16 21:19:35.609423 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 16 21:19:35.609434 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 16 21:19:35.609448 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 16 21:19:35.609460 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 16 21:19:35.609776 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 16 21:19:35.610033 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 16 21:19:35.610381 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 21:19:35.610400 kernel: vgaarb: loaded
Jan 16 21:19:35.610413 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 21:19:35.610427 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 21:19:35.610438 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 21:19:35.610451 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 21:19:35.610464 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 21:19:35.610482 kernel: pnp: PnP ACPI init
Jan 16 21:19:35.610930 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 16 21:19:35.610951 kernel: pnp: PnP ACPI: found 6 devices
Jan 16 21:19:35.610965 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 21:19:35.610978 kernel: NET: Registered PF_INET protocol family
Jan 16 21:19:35.610991 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 21:19:35.611008 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 21:19:35.611043 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 21:19:35.611158 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 21:19:35.611172 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 21:19:35.611185 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 21:19:35.611197 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 21:19:35.611209 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 21:19:35.611225 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 21:19:35.611237 kernel: NET: Registered PF_XDP protocol family
Jan 16 21:19:35.611474 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 16 21:19:35.611764 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 16 21:19:35.611976 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 21:19:35.612645 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 21:19:35.612857 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 21:19:35.613156 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 16 21:19:35.613364 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 16 21:19:35.613630 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 16 21:19:35.613648 kernel: PCI: CLS 0 bytes, default 64
Jan 16 21:19:35.613661 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 16 21:19:35.613677 kernel: Initialise system trusted keyrings
Jan 16 21:19:35.613692 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 21:19:35.613704 kernel: Key type asymmetric registered
Jan 16 21:19:35.613715 kernel: Asymmetric key parser 'x509' registered
Jan 16 21:19:35.613727 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 21:19:35.613739 kernel: io scheduler mq-deadline registered
Jan 16 21:19:35.613750 kernel: io scheduler kyber registered
Jan 16 21:19:35.613762 kernel: io scheduler bfq registered
Jan 16 21:19:35.613776 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 21:19:35.613789 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 16 21:19:35.613804 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 16 21:19:35.613816 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 16 21:19:35.613828 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 21:19:35.613842 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 21:19:35.613854 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 21:19:35.613866 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 21:19:35.613877 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 21:19:35.614228 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 16 21:19:35.614856 kernel: rtc_cmos 00:04: registered as rtc0
Jan 16 21:19:35.615209 kernel: rtc_cmos 00:04: setting system clock to 2026-01-16T21:19:29 UTC (1768598369)
Jan 16 21:19:35.615230 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 16 21:19:35.615473 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 16 21:19:35.615492 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 16 21:19:35.615564 kernel: efifb: probing for efifb
Jan 16 21:19:35.615579 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 16 21:19:35.615591 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 16 21:19:35.615612 kernel: efifb: scrolling: redraw
Jan 16 21:19:35.615624 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 16 21:19:35.615636 kernel: Console: switching to colour frame buffer device 160x50
Jan 16 21:19:35.615650 kernel: fb0: EFI VGA frame buffer device
Jan 16 21:19:35.615662 kernel: pstore: Using crash dump compression: deflate
Jan 16 21:19:35.615677 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 16 21:19:35.615689 kernel: NET: Registered PF_INET6 protocol family
Jan 16 21:19:35.615705 kernel: Segment Routing with IPv6
Jan 16 21:19:35.615720 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 21:19:35.615731 kernel: NET: Registered PF_PACKET protocol family
Jan 16 21:19:35.615746 kernel: Key type dns_resolver
registered Jan 16 21:19:35.615758 kernel: IPI shorthand broadcast: enabled Jan 16 21:19:35.615771 kernel: sched_clock: Marking stable (6680048452, 3557866336)->(11760532604, -1522617816) Jan 16 21:19:35.615785 kernel: registered taskstats version 1 Jan 16 21:19:35.615797 kernel: Loading compiled-in X.509 certificates Jan 16 21:19:35.615815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a9591db9912320a48a0589d0293fff3e535b90df' Jan 16 21:19:35.615828 kernel: Demotion targets for Node 0: null Jan 16 21:19:35.615842 kernel: Key type .fscrypt registered Jan 16 21:19:35.615854 kernel: Key type fscrypt-provisioning registered Jan 16 21:19:35.615867 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 21:19:35.615881 kernel: ima: Allocated hash algorithm: sha1 Jan 16 21:19:35.615896 kernel: ima: No architecture policies found Jan 16 21:19:35.615911 kernel: clk: Disabling unused clocks Jan 16 21:19:35.615922 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 16 21:19:35.615936 kernel: Write protecting the kernel read-only data: 47104k Jan 16 21:19:35.615949 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 16 21:19:35.615961 kernel: Run /init as init process Jan 16 21:19:35.615976 kernel: with arguments: Jan 16 21:19:35.615988 kernel: /init Jan 16 21:19:35.616006 kernel: with environment: Jan 16 21:19:35.616017 kernel: HOME=/ Jan 16 21:19:35.616029 kernel: TERM=linux Jan 16 21:19:35.616043 kernel: SCSI subsystem initialized Jan 16 21:19:35.616149 kernel: libata version 3.00 loaded. 
Jan 16 21:19:35.616408 kernel: ahci 0000:00:1f.2: version 3.0 Jan 16 21:19:35.616431 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 16 21:19:35.616743 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 16 21:19:35.616996 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 16 21:19:35.617389 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 16 21:19:35.617861 kernel: scsi host0: ahci Jan 16 21:19:35.618274 kernel: scsi host1: ahci Jan 16 21:19:35.618664 kernel: scsi host2: ahci Jan 16 21:19:35.618925 kernel: scsi host3: ahci Jan 16 21:19:35.619356 kernel: scsi host4: ahci Jan 16 21:19:35.619709 kernel: scsi host5: ahci Jan 16 21:19:35.619733 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 16 21:19:35.619747 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 16 21:19:35.619766 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 16 21:19:35.619781 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 16 21:19:35.619793 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 16 21:19:35.619808 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 16 21:19:35.619821 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 16 21:19:35.619833 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 16 21:19:35.619848 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 16 21:19:35.619863 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 16 21:19:35.619877 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 16 21:19:35.619890 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 16 21:19:35.619901 kernel: ata3.00: LPM support broken, forcing max_power Jan 16 21:19:35.619917 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 16 21:19:35.619929 
kernel: ata3.00: applying bridge limits Jan 16 21:19:35.619943 kernel: ata3.00: LPM support broken, forcing max_power Jan 16 21:19:35.619956 kernel: ata3.00: configured for UDMA/100 Jan 16 21:19:35.620624 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 16 21:19:35.620904 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 16 21:19:35.621294 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 16 21:19:35.621316 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 21:19:35.621331 kernel: GPT:16515071 != 27000831 Jan 16 21:19:35.621349 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 21:19:35.621361 kernel: GPT:16515071 != 27000831 Jan 16 21:19:35.621373 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 21:19:35.621385 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 21:19:35.621704 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 16 21:19:35.621722 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 16 21:19:35.622259 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 16 21:19:35.622283 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 21:19:35.622295 kernel: device-mapper: uevent: version 1.0.3 Jan 16 21:19:35.622307 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 16 21:19:35.622319 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 16 21:19:35.622331 kernel: raid6: avx2x4 gen() 25167 MB/s Jan 16 21:19:35.622343 kernel: raid6: avx2x2 gen() 23037 MB/s Jan 16 21:19:35.622355 kernel: raid6: avx2x1 gen() 14208 MB/s Jan 16 21:19:35.622370 kernel: raid6: using algorithm avx2x4 gen() 25167 MB/s Jan 16 21:19:35.622381 kernel: raid6: .... 
xor() 6063 MB/s, rmw enabled Jan 16 21:19:35.622394 kernel: raid6: using avx2x2 recovery algorithm Jan 16 21:19:35.622406 kernel: xor: automatically using best checksumming function avx Jan 16 21:19:35.622417 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 21:19:35.622429 kernel: BTRFS: device fsid a5f82c06-1ff1-43b3-a650-214802f1359b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181) Jan 16 21:19:35.622441 kernel: BTRFS info (device dm-0): first mount of filesystem a5f82c06-1ff1-43b3-a650-214802f1359b Jan 16 21:19:35.622456 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 16 21:19:35.622468 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 21:19:35.622480 kernel: BTRFS info (device dm-0): enabling free space tree Jan 16 21:19:35.622492 kernel: loop: module loaded Jan 16 21:19:35.622559 kernel: loop0: detected capacity change from 0 to 100536 Jan 16 21:19:35.622578 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 21:19:35.622591 systemd[1]: Successfully made /usr/ read-only. Jan 16 21:19:35.622613 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 21:19:35.622627 systemd[1]: Detected virtualization kvm. Jan 16 21:19:35.622641 systemd[1]: Detected architecture x86-64. Jan 16 21:19:35.622653 systemd[1]: Running in initrd. Jan 16 21:19:35.622667 systemd[1]: No hostname configured, using default hostname. Jan 16 21:19:35.622687 systemd[1]: Hostname set to . Jan 16 21:19:35.622702 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 21:19:35.622717 systemd[1]: Queued start job for default target initrd.target. 
Jan 16 21:19:35.622730 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 21:19:35.622743 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 21:19:35.622758 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 21:19:35.622771 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 21:19:35.622791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 21:19:35.622804 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 21:19:35.622819 kernel: hrtimer: interrupt took 3563683 ns Jan 16 21:19:35.622832 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 21:19:35.622847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 21:19:35.622861 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 21:19:35.622880 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 16 21:19:35.622893 systemd[1]: Reached target paths.target - Path Units. Jan 16 21:19:35.622906 systemd[1]: Reached target slices.target - Slice Units. Jan 16 21:19:35.622921 systemd[1]: Reached target swap.target - Swaps. Jan 16 21:19:35.622933 systemd[1]: Reached target timers.target - Timer Units. Jan 16 21:19:35.622948 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 21:19:35.622961 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 21:19:35.622979 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 21:19:35.622992 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 16 21:19:35.623006 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 16 21:19:35.623021 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 21:19:35.623033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 21:19:35.623049 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 21:19:35.623158 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 21:19:35.623179 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 21:19:35.623192 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 16 21:19:35.623206 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 21:19:35.623220 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 21:19:35.623233 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 16 21:19:35.623249 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 21:19:35.623265 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 21:19:35.623280 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 21:19:35.623294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 21:19:35.623308 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 21:19:35.623326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 21:19:35.623339 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 21:19:35.623466 systemd-journald[324]: Collecting audit messages is enabled. 
Jan 16 21:19:35.623566 kernel: audit: type=1130 audit(1768598375.541:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:35.623584 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 21:19:35.623602 systemd-journald[324]: Journal started Jan 16 21:19:35.623629 systemd-journald[324]: Runtime Journal (/run/log/journal/c5b41d328394450eb4757c4c39566e12) is 6M, max 48M, 42M free. Jan 16 21:19:35.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:35.642677 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 21:19:35.652781 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 21:19:35.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:35.680995 kernel: audit: type=1130 audit(1768598375.636:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.076651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 21:19:36.168994 kernel: audit: type=1130 audit(1768598376.106:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:36.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.167485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:19:36.212721 kernel: audit: type=1130 audit(1768598376.186:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.197285 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 21:19:36.268594 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 16 21:19:36.289329 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 21:19:36.366957 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 21:19:36.458349 kernel: audit: type=1130 audit(1768598376.377:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.497995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 16 21:19:36.538294 kernel: audit: type=1130 audit(1768598376.509:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.538589 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 21:19:36.623032 kernel: audit: type=1130 audit(1768598376.553:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.557989 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 21:19:36.714419 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 16 21:19:36.714719 dracut-cmdline[354]: dracut-109 Jan 16 21:19:36.751861 dracut-cmdline[354]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e880b5400e832e1de59b993d9ba6b86a9089175f10b4985da8b7b47cc8c74099 Jan 16 21:19:36.823644 kernel: Bridge firewalling registered Jan 16 21:19:36.831763 systemd-modules-load[325]: Inserted module 'br_netfilter' Jan 16 21:19:36.843630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 21:19:36.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:36.885155 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 21:19:36.932023 kernel: audit: type=1130 audit(1768598376.871:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:37.033997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 21:19:37.088438 kernel: audit: type=1130 audit(1768598377.050:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:37.088588 kernel: audit: type=1334 audit(1768598377.051:11): prog-id=6 op=LOAD Jan 16 21:19:37.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:37.051000 audit: BPF prog-id=6 op=LOAD Jan 16 21:19:37.058323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 21:19:37.334911 systemd-resolved[412]: Positive Trust Anchors: Jan 16 21:19:37.335050 systemd-resolved[412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 21:19:37.335205 systemd-resolved[412]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 21:19:37.335298 systemd-resolved[412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 21:19:37.489663 systemd-resolved[412]: Defaulting to hostname 'linux'. Jan 16 21:19:37.500160 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 21:19:37.513631 kernel: Loading iSCSI transport class v2.0-870. Jan 16 21:19:37.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:37.534587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 21:19:37.650348 kernel: iscsi: registered transport (tcp) Jan 16 21:19:37.733729 kernel: iscsi: registered transport (qla4xxx) Jan 16 21:19:37.733820 kernel: QLogic iSCSI HBA Driver Jan 16 21:19:37.891031 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 16 21:19:38.012885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 21:19:38.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:38.037590 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 21:19:38.331241 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 21:19:38.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:38.353435 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 21:19:38.410015 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 21:19:38.713602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 21:19:38.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:38.775000 audit: BPF prog-id=7 op=LOAD Jan 16 21:19:38.775000 audit: BPF prog-id=8 op=LOAD Jan 16 21:19:38.789574 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 21:19:39.082816 systemd-udevd[591]: Using default interface naming scheme 'v257'. Jan 16 21:19:39.199308 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 21:19:39.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:39.268036 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 21:19:39.498828 dracut-pre-trigger[651]: rd.md=0: removing MD RAID activation Jan 16 21:19:39.706182 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 21:19:39.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:39.727000 audit: BPF prog-id=9 op=LOAD Jan 16 21:19:39.736385 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 21:19:39.788448 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 21:19:39.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:39.839365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 21:19:40.040479 systemd-networkd[727]: lo: Link UP Jan 16 21:19:40.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:40.040584 systemd-networkd[727]: lo: Gained carrier Jan 16 21:19:40.042871 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 21:19:40.068503 systemd[1]: Reached target network.target - Network. Jan 16 21:19:40.257615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 21:19:40.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:40.338640 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 21:19:40.567652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 16 21:19:40.611986 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 21:19:40.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:40.663187 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 16 21:19:40.663796 kernel: audit: type=1130 audit(1768598380.645:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:40.678595 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 16 21:19:40.722996 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 16 21:19:40.773332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 21:19:40.841359 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 21:19:40.856997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 21:19:40.859377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 21:19:41.030458 kernel: audit: type=1131 audit(1768598380.960:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:40.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 16 21:19:40.913818 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 21:19:41.070489 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 21:19:40.928517 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 21:19:40.959938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 21:19:40.960413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:19:40.960603 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:19:41.009738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:19:41.237729 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 21:19:41.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:41.272935 disk-uuid[783]: Primary Header is updated.
Jan 16 21:19:41.272935 disk-uuid[783]: Secondary Entries is updated.
Jan 16 21:19:41.272935 disk-uuid[783]: Secondary Header is updated.
Jan 16 21:19:41.283036 kernel: audit: type=1130 audit(1768598381.259:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:41.366861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:19:41.447288 kernel: audit: type=1130 audit(1768598381.379:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:41.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:41.489755 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 16 21:19:41.503217 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:19:41.515399 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 21:19:41.518790 systemd-networkd[727]: eth0: Link UP
Jan 16 21:19:41.525907 systemd-networkd[727]: eth0: Gained carrier
Jan 16 21:19:41.525923 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:19:41.608259 kernel: AES CTR mode by8 optimization enabled
Jan 16 21:19:41.616271 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 16 21:19:42.526653 disk-uuid[786]: Warning: The kernel is still using the old partition table.
Jan 16 21:19:42.526653 disk-uuid[786]: The new table will be used at the next reboot or after you
Jan 16 21:19:42.526653 disk-uuid[786]: run partprobe(8) or kpartx(8)
Jan 16 21:19:42.526653 disk-uuid[786]: The operation has completed successfully.
Jan 16 21:19:42.596630 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 21:19:42.659217 kernel: audit: type=1130 audit(1768598382.601:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:42.659260 kernel: audit: type=1131 audit(1768598382.601:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:42.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:42.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:42.597007 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 21:19:42.661384 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 21:19:42.799324 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (861)
Jan 16 21:19:42.813049 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:19:42.813201 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:19:42.860492 kernel: BTRFS info (device vda6): turning on async discard
Jan 16 21:19:42.860617 kernel: BTRFS info (device vda6): enabling free space tree
Jan 16 21:19:42.891391 kernel: BTRFS info (device vda6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:19:42.907518 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 21:19:42.941671 kernel: audit: type=1130 audit(1768598382.912:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:42.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:42.915290 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 21:19:43.462338 systemd-networkd[727]: eth0: Gained IPv6LL
Jan 16 21:19:46.213505 ignition[880]: Ignition 2.24.0
Jan 16 21:19:46.213552 ignition[880]: Stage: fetch-offline
Jan 16 21:19:46.213874 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:19:46.213895 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:19:46.215878 ignition[880]: parsed url from cmdline: ""
Jan 16 21:19:46.215885 ignition[880]: no config URL provided
Jan 16 21:19:46.218248 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 21:19:46.220212 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Jan 16 21:19:46.220350 ignition[880]: op(1): [started] loading QEMU firmware config module
Jan 16 21:19:46.220358 ignition[880]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 16 21:19:46.596027 ignition[880]: op(1): [finished] loading QEMU firmware config module
Jan 16 21:19:47.388258 ignition[880]: parsing config with SHA512: eac98d8a856da81cbb4fe8246893f8c1038f5f9829fcfdfec926e915bdfc2d185d8f6750e097e2e41c2c0630525a92000dd758116c3e34e14738ea429577276c
Jan 16 21:19:47.496849 unknown[880]: fetched base config from "system"
Jan 16 21:19:47.498666 unknown[880]: fetched user config from "qemu"
Jan 16 21:19:47.501943 ignition[880]: fetch-offline: fetch-offline passed
Jan 16 21:19:47.513912 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 21:19:47.502264 ignition[880]: Ignition finished successfully
Jan 16 21:19:47.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:47.572001 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 16 21:19:47.612045 kernel: audit: type=1130 audit(1768598387.564:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:47.579017 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 21:19:47.909404 ignition[890]: Ignition 2.24.0
Jan 16 21:19:47.909417 ignition[890]: Stage: kargs
Jan 16 21:19:47.912411 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:19:47.912427 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:19:47.960631 ignition[890]: kargs: kargs passed
Jan 16 21:19:47.960999 ignition[890]: Ignition finished successfully
Jan 16 21:19:47.991817 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 21:19:48.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:48.026319 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 21:19:48.052428 kernel: audit: type=1130 audit(1768598388.017:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:48.407827 ignition[897]: Ignition 2.24.0
Jan 16 21:19:48.407885 ignition[897]: Stage: disks
Jan 16 21:19:48.408285 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 16 21:19:48.474931 kernel: audit: type=1130 audit(1768598388.423:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:48.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:48.418466 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 21:19:48.408298 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:19:48.427962 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 21:19:48.410322 ignition[897]: disks: disks passed
Jan 16 21:19:48.476164 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 21:19:48.410387 ignition[897]: Ignition finished successfully
Jan 16 21:19:48.510454 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 21:19:48.519741 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 21:19:48.542468 systemd[1]: Reached target basic.target - Basic System.
Jan 16 21:19:48.570355 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 21:19:48.924748 systemd-fsck[907]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 16 21:19:48.974689 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 21:19:48.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:49.005486 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 21:19:49.038261 kernel: audit: type=1130 audit(1768598388.988:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:49.483755 kernel: EXT4-fs (vda9): mounted filesystem ec5ae8d3-548b-4a34-bd68-b1a953fcffb6 r/w with ordered data mode. Quota mode: none.
Jan 16 21:19:49.485517 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 21:19:49.495196 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 21:19:49.525627 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 21:19:49.535010 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 21:19:49.550932 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 16 21:19:49.551214 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 21:19:49.551273 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 21:19:49.628708 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 21:19:49.667211 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 21:19:49.733249 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916)
Jan 16 21:19:49.757886 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:19:49.758129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:19:49.852441 kernel: BTRFS info (device vda6): turning on async discard
Jan 16 21:19:49.852889 kernel: BTRFS info (device vda6): enabling free space tree
Jan 16 21:19:49.867200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 21:19:51.014263 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 21:19:51.084321 kernel: audit: type=1130 audit(1768598391.029:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:51.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:51.077464 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 21:19:51.119831 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 21:19:51.195486 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 21:19:51.223777 kernel: BTRFS info (device vda6): last unmount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:19:51.374974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 21:19:51.444700 kernel: audit: type=1130 audit(1768598391.375:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:51.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:51.832559 ignition[1013]: INFO : Ignition 2.24.0
Jan 16 21:19:51.832559 ignition[1013]: INFO : Stage: mount
Jan 16 21:19:51.860046 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:19:51.860046 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:19:51.860046 ignition[1013]: INFO : mount: mount passed
Jan 16 21:19:51.860046 ignition[1013]: INFO : Ignition finished successfully
Jan 16 21:19:51.958448 kernel: audit: type=1130 audit(1768598391.893:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:51.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:51.874830 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 21:19:51.903539 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 21:19:52.012330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 21:19:52.084178 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026)
Jan 16 21:19:52.096639 kernel: BTRFS info (device vda6): first mount of filesystem 984b7cbf-e15c-4ac8-8ab0-1fb2c55516eb
Jan 16 21:19:52.096705 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 21:19:52.139005 kernel: BTRFS info (device vda6): turning on async discard
Jan 16 21:19:52.139183 kernel: BTRFS info (device vda6): enabling free space tree
Jan 16 21:19:52.149780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 21:19:52.338735 ignition[1043]: INFO : Ignition 2.24.0
Jan 16 21:19:52.338735 ignition[1043]: INFO : Stage: files
Jan 16 21:19:52.338735 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:19:52.364968 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:19:52.364968 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 21:19:52.364968 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 21:19:52.364968 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 21:19:52.414444 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 21:19:52.414444 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 21:19:52.414444 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 21:19:52.414444 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 16 21:19:52.414444 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 16 21:19:52.393799 unknown[1043]: wrote ssh authorized keys file for user: core
Jan 16 21:19:52.627398 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 16 21:19:52.816270 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 16 21:19:52.816270 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:19:52.853913 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 16 21:19:53.334994 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 16 21:19:54.103460 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 16 21:19:54.103460 ignition[1043]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 16 21:19:54.131521 ignition[1043]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 21:19:54.161725 ignition[1043]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 21:19:54.161725 ignition[1043]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 16 21:19:54.161725 ignition[1043]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 16 21:19:54.161725 ignition[1043]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 16 21:19:54.161725 ignition[1043]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 16 21:19:54.161725 ignition[1043]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 16 21:19:54.161725 ignition[1043]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 16 21:19:54.339968 ignition[1043]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 16 21:19:54.365003 ignition[1043]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 16 21:19:54.365003 ignition[1043]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 16 21:19:54.365003 ignition[1043]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 21:19:54.365003 ignition[1043]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 21:19:54.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.460143 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 21:19:54.460143 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 21:19:54.460143 ignition[1043]: INFO : files: files passed
Jan 16 21:19:54.460143 ignition[1043]: INFO : Ignition finished successfully
Jan 16 21:19:54.538454 kernel: audit: type=1130 audit(1768598394.411:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.390943 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 21:19:54.414556 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 21:19:54.444337 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 21:19:54.592269 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 21:19:54.592517 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 21:19:54.652769 kernel: audit: type=1130 audit(1768598394.608:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.652879 kernel: audit: type=1131 audit(1768598394.608:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.654035 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 16 21:19:54.681170 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:19:54.681170 initrd-setup-root-after-ignition[1076]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:19:54.718028 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 21:19:54.755004 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 21:19:54.815292 kernel: audit: type=1130 audit(1768598394.763:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:54.763921 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 21:19:54.836965 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 21:19:55.029551 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 21:19:55.029889 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 21:19:55.057505 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 21:19:55.132721 kernel: audit: type=1130 audit(1768598395.056:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.132835 kernel: audit: type=1131 audit(1768598395.056:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.116222 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 21:19:55.126332 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 21:19:55.128407 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 21:19:55.279244 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 21:19:55.357517 kernel: audit: type=1130 audit(1768598395.292:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.299893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 21:19:55.409992 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 21:19:55.412413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:19:55.427191 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 21:19:55.462923 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 21:19:55.485444 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 21:19:55.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.485707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 21:19:55.532537 kernel: audit: type=1131 audit(1768598395.499:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.528041 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 21:19:55.549860 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 21:19:55.568503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 21:19:55.579021 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 21:19:55.590197 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 21:19:55.590383 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 16 21:19:55.635338 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 21:19:55.642169 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 21:19:55.669188 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 21:19:55.678936 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 21:19:55.739878 kernel: audit: type=1131 audit(1768598395.712:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.697004 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 21:19:55.702050 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 21:19:55.702371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 21:19:55.740432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 21:19:55.753915 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 21:19:55.770183 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 21:19:55.771274 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 21:19:55.783014 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 21:19:55.849986 kernel: audit: type=1131 audit(1768598395.800:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:55.783331 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 21:19:55.803142 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 21:19:55.804294 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 21:19:55.809537 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 21:19:55.827868 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 21:19:55.831399 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 21:19:55.862226 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 21:19:55.918208 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 21:19:55.924232 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 21:19:55.924437 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 21:19:55.945216 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 21:19:55.945368 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 21:19:55.984455 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 16 21:19:55.984586 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 16 21:19:56.003994 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 21:19:56.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.005760 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 21:19:56.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.033952 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 21:19:56.034261 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 21:19:56.048963 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 21:19:56.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.055712 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 21:19:56.055891 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 21:19:56.083806 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 21:19:56.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.105519 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 21:19:56.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.108314 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:19:56.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.117391 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 21:19:56.117569 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 21:19:56.127416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 21:19:56.127695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 21:19:56.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.172190 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 21:19:56.172365 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 21:19:56.205378 ignition[1100]: INFO : Ignition 2.24.0
Jan 16 21:19:56.205378 ignition[1100]: INFO : Stage: umount
Jan 16 21:19:56.205378 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 21:19:56.205378 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 16 21:19:56.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.234330 ignition[1100]: INFO : umount: umount passed
Jan 16 21:19:56.234330 ignition[1100]: INFO : Ignition finished successfully
Jan 16 21:19:56.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.213511 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 21:19:56.213730 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 21:19:56.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.219538 systemd[1]: Stopped target network.target - Network.
Jan 16 21:19:56.230385 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 21:19:56.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.230499 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 21:19:56.236019 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 21:19:56.236334 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 21:19:56.266729 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 21:19:56.266821 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 21:19:56.281866 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 21:19:56.281944 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 21:19:56.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.288368 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 21:19:56.301744 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 21:19:56.363000 audit: BPF prog-id=6 op=UNLOAD
Jan 16 21:19:56.303734 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 21:19:56.334871 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 21:19:56.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:19:56.335174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 21:19:56.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.370510 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 21:19:56.372172 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 21:19:56.378460 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 21:19:56.378536 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 21:19:56.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.407445 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 21:19:56.408995 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 21:19:56.438559 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 16 21:19:56.443897 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 21:19:56.443993 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 21:19:56.457228 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 21:19:56.476000 audit: BPF prog-id=9 op=UNLOAD Jan 16 21:19:56.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.478886 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 21:19:56.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:19:56.478969 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 21:19:56.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.483980 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 21:19:56.484042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 21:19:56.491012 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 21:19:56.491187 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 21:19:56.509217 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 21:19:56.566889 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 21:19:56.577975 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 21:19:56.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.580891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 21:19:56.581415 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 21:19:56.612290 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 21:19:56.612360 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 21:19:56.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.625894 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 16 21:19:56.626044 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 21:19:56.671207 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 21:19:56.671308 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 21:19:56.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.714610 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 21:19:56.714812 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 21:19:56.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.739899 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 21:19:56.760772 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 16 21:19:56.765437 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 21:19:56.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.794894 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 21:19:56.795976 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 21:19:56.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.823971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 16 21:19:56.824601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 21:19:56.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.866590 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 21:19:56.866907 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 21:19:56.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.931025 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 21:19:56.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:56.931428 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 21:19:56.949253 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 21:19:56.967417 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 21:19:57.024268 systemd[1]: Switching root. Jan 16 21:19:57.091230 systemd-journald[324]: Journal stopped Jan 16 21:20:00.137980 systemd-journald[324]: Received SIGTERM from PID 1 (systemd). 
Jan 16 21:20:00.138199 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 21:20:00.138232 kernel: SELinux: policy capability open_perms=1 Jan 16 21:20:00.138406 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 21:20:00.138427 kernel: SELinux: policy capability always_check_network=0 Jan 16 21:20:00.138445 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 21:20:00.138468 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 21:20:00.138486 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 21:20:00.138509 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 21:20:00.138528 kernel: SELinux: policy capability userspace_initial_context=0 Jan 16 21:20:00.138600 systemd[1]: Successfully loaded SELinux policy in 174.548ms. Jan 16 21:20:00.138629 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.379ms. Jan 16 21:20:00.138712 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 21:20:00.138731 systemd[1]: Detected virtualization kvm. Jan 16 21:20:00.138751 systemd[1]: Detected architecture x86-64. Jan 16 21:20:00.138776 systemd[1]: Detected first boot. Jan 16 21:20:00.138793 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 21:20:00.138871 zram_generator::config[1144]: No configuration found. 
Jan 16 21:20:00.138897 kernel: Guest personality initialized and is inactive Jan 16 21:20:00.138916 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 16 21:20:00.138931 kernel: Initialized host personality Jan 16 21:20:00.138947 kernel: NET: Registered PF_VSOCK protocol family Jan 16 21:20:00.138967 systemd[1]: Populated /etc with preset unit settings. Jan 16 21:20:00.138983 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 21:20:00.139158 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 16 21:20:00.139182 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 21:20:00.139207 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 21:20:00.139226 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 21:20:00.139244 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 21:20:00.139261 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 21:20:00.139280 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 21:20:00.139364 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 21:20:00.139383 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 21:20:00.139400 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 21:20:00.139419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 21:20:00.139702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 21:20:00.139726 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 21:20:00.139748 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 16 21:20:00.139766 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 21:20:00.139783 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 21:20:00.139803 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 21:20:00.139877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 21:20:00.139899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 21:20:00.139920 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 21:20:00.139940 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 21:20:00.139959 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 21:20:00.139978 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 21:20:00.139997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 21:20:00.140169 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 21:20:00.140191 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 16 21:20:00.140211 systemd[1]: Reached target slices.target - Slice Units. Jan 16 21:20:00.140230 systemd[1]: Reached target swap.target - Swaps. Jan 16 21:20:00.140248 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 21:20:00.140266 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 21:20:00.140285 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 16 21:20:00.140356 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 21:20:00.140376 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 16 21:20:00.140401 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 21:20:00.140419 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 16 21:20:00.140437 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 16 21:20:00.140455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 21:20:00.140475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 21:20:00.140537 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 21:20:00.140556 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 21:20:00.140580 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 21:20:00.140598 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 21:20:00.140617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:20:00.140695 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 21:20:00.140716 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 21:20:00.140795 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 21:20:00.140818 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 21:20:00.140835 systemd[1]: Reached target machines.target - Containers. Jan 16 21:20:00.140851 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 21:20:00.140869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 21:20:00.140887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 16 21:20:00.140905 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 21:20:00.140979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 21:20:00.140998 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 21:20:00.141016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 21:20:00.141035 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 21:20:00.141053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 21:20:00.141164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 21:20:00.141191 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 21:20:00.141210 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 21:20:00.141229 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 16 21:20:00.141247 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 21:20:00.141267 kernel: audit: type=1131 audit(1768598399.927:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.141285 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 21:20:00.141305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 21:20:00.141330 kernel: audit: type=1131 audit(1768598399.967:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:00.141348 kernel: audit: type=1334 audit(1768598399.985:100): prog-id=14 op=UNLOAD Jan 16 21:20:00.141365 kernel: audit: type=1334 audit(1768598399.985:101): prog-id=13 op=UNLOAD Jan 16 21:20:00.141433 kernel: audit: type=1334 audit(1768598400.004:102): prog-id=15 op=LOAD Jan 16 21:20:00.141452 kernel: audit: type=1334 audit(1768598400.019:103): prog-id=16 op=LOAD Jan 16 21:20:00.141510 kernel: audit: type=1334 audit(1768598400.030:104): prog-id=17 op=LOAD Jan 16 21:20:00.141527 kernel: fuse: init (API version 7.41) Jan 16 21:20:00.141545 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 21:20:00.141562 kernel: ACPI: bus type drm_connector registered Jan 16 21:20:00.141580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 21:20:00.141599 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 21:20:00.141620 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 21:20:00.141725 systemd-journald[1230]: Collecting audit messages is enabled. Jan 16 21:20:00.141762 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 16 21:20:00.141782 kernel: audit: type=1305 audit(1768598400.134:105): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 16 21:20:00.141799 systemd-journald[1230]: Journal started Jan 16 21:20:00.141887 systemd-journald[1230]: Runtime Journal (/run/log/journal/c5b41d328394450eb4757c4c39566e12) is 6M, max 48M, 42M free. 
Jan 16 21:19:59.387000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 16 21:19:59.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:59.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:19:59.985000 audit: BPF prog-id=14 op=UNLOAD Jan 16 21:19:59.985000 audit: BPF prog-id=13 op=UNLOAD Jan 16 21:20:00.004000 audit: BPF prog-id=15 op=LOAD Jan 16 21:20:00.019000 audit: BPF prog-id=16 op=LOAD Jan 16 21:20:00.030000 audit: BPF prog-id=17 op=LOAD Jan 16 21:20:00.134000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 16 21:19:58.805724 systemd[1]: Queued start job for default target multi-user.target. Jan 16 21:19:58.832703 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 21:19:58.834453 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 21:19:58.835322 systemd[1]: systemd-journald.service: Consumed 2.701s CPU time. 
Jan 16 21:20:00.134000 audit[1230]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc4be07ba0 a2=4000 a3=0 items=0 ppid=1 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.134000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 16 21:20:00.186212 kernel: audit: type=1300 audit(1768598400.134:105): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc4be07ba0 a2=4000 a3=0 items=0 ppid=1 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:00.186290 kernel: audit: type=1327 audit(1768598400.134:105): proctitle="/usr/lib/systemd/systemd-journald" Jan 16 21:20:00.186319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 21:20:00.218007 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 21:20:00.240284 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 21:20:00.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.245179 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 21:20:00.255452 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 21:20:00.261758 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 21:20:00.268241 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 16 21:20:00.275551 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 21:20:00.285812 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 21:20:00.295014 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 21:20:00.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.304918 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 21:20:00.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.316816 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 21:20:00.317315 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 21:20:00.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.327598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 21:20:00.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:00.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.327941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 21:20:00.337366 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 21:20:00.339006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 21:20:00.352403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 21:20:00.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.355155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 21:20:00.365020 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 21:20:00.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.365497 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 16 21:20:00.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.376754 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 21:20:00.378293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 21:20:00.388462 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 21:20:00.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.403394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 21:20:00.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.423840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 16 21:20:00.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.436270 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 16 21:20:00.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.448985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 21:20:00.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:00.496497 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 21:20:00.503182 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 16 21:20:00.515478 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 21:20:00.528919 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 21:20:00.535933 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 21:20:00.536027 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 21:20:00.548578 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 16 21:20:00.566859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 16 21:20:00.567242 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 16 21:20:00.575629 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 21:20:00.583777 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 21:20:00.593917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 21:20:00.598873 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 21:20:00.605617 systemd-journald[1230]: Time spent on flushing to /var/log/journal/c5b41d328394450eb4757c4c39566e12 is 41.509ms for 1203 entries.
Jan 16 21:20:00.605617 systemd-journald[1230]: System Journal (/var/log/journal/c5b41d328394450eb4757c4c39566e12) is 8M, max 163.5M, 155.5M free.
Jan 16 21:20:00.673368 systemd-journald[1230]: Received client request to flush runtime journal.
Jan 16 21:20:00.614702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 21:20:00.616542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 21:20:00.633724 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 21:20:00.648368 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 21:20:00.659771 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 21:20:00.667200 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 21:20:00.679988 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 21:20:00.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:00.705183 kernel: loop1: detected capacity change from 0 to 50784
Jan 16 21:20:00.713859 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 21:20:00.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:00.723934 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 21:20:00.743324 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 16 21:20:00.754891 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 21:20:00.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:00.775172 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 21:20:00.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:00.789000 audit: BPF prog-id=18 op=LOAD
Jan 16 21:20:00.790000 audit: BPF prog-id=19 op=LOAD
Jan 16 21:20:00.790000 audit: BPF prog-id=20 op=LOAD
Jan 16 21:20:00.792815 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 16 21:20:00.809000 audit: BPF prog-id=21 op=LOAD
Jan 16 21:20:00.818271 kernel: loop2: detected capacity change from 0 to 111560
Jan 16 21:20:00.813541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 21:20:00.826563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 21:20:00.843000 audit: BPF prog-id=22 op=LOAD
Jan 16 21:20:00.849000 audit: BPF prog-id=23 op=LOAD
Jan 16 21:20:00.849000 audit: BPF prog-id=24 op=LOAD
Jan 16 21:20:00.852459 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 16 21:20:00.861000 audit: BPF prog-id=25 op=LOAD
Jan 16 21:20:00.862000 audit: BPF prog-id=26 op=LOAD
Jan 16 21:20:00.862000 audit: BPF prog-id=27 op=LOAD
Jan 16 21:20:00.864548 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 21:20:00.881947 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 16 21:20:00.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:00.909740 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Jan 16 21:20:00.909816 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Jan 16 21:20:00.927238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 21:20:00.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:00.958219 kernel: loop3: detected capacity change from 0 to 224512
Jan 16 21:20:00.973586 systemd-nsresourced[1283]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 16 21:20:00.981469 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 16 21:20:01.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:01.003041 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 21:20:01.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:01.036225 kernel: loop4: detected capacity change from 0 to 50784
Jan 16 21:20:01.101255 kernel: loop5: detected capacity change from 0 to 111560
Jan 16 21:20:01.133950 systemd-oomd[1280]: No swap; memory pressure usage will be degraded
Jan 16 21:20:01.137513 kernel: loop6: detected capacity change from 0 to 224512
Jan 16 21:20:01.135316 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 16 21:20:01.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:01.162331 systemd-resolved[1281]: Positive Trust Anchors:
Jan 16 21:20:01.162624 systemd-resolved[1281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 21:20:01.162757 systemd-resolved[1281]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 16 21:20:01.162823 systemd-resolved[1281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 21:20:01.180178 systemd-resolved[1281]: Defaulting to hostname 'linux'.
Jan 16 21:20:01.182363 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 21:20:01.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:01.188839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 21:20:01.195310 (sd-merge)[1304]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 16 21:20:01.203733 (sd-merge)[1304]: Merged extensions into '/usr'.
Jan 16 21:20:01.212227 systemd[1]: Reload requested from client PID 1265 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 21:20:01.212337 systemd[1]: Reloading...
Jan 16 21:20:01.344159 zram_generator::config[1338]: No configuration found.
Jan 16 21:20:01.679019 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 21:20:01.681241 systemd[1]: Reloading finished in 468 ms.
Jan 16 21:20:01.730950 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 21:20:01.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:01.748894 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 21:20:01.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:01.792320 systemd[1]: Starting ensure-sysext.service...
Jan 16 21:20:01.803311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 21:20:01.818000 audit: BPF prog-id=8 op=UNLOAD
Jan 16 21:20:01.818000 audit: BPF prog-id=7 op=UNLOAD
Jan 16 21:20:01.825000 audit: BPF prog-id=28 op=LOAD
Jan 16 21:20:01.825000 audit: BPF prog-id=29 op=LOAD
Jan 16 21:20:01.828218 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 21:20:01.844000 audit: BPF prog-id=30 op=LOAD
Jan 16 21:20:01.844000 audit: BPF prog-id=22 op=UNLOAD
Jan 16 21:20:01.844000 audit: BPF prog-id=31 op=LOAD
Jan 16 21:20:01.845000 audit: BPF prog-id=32 op=LOAD
Jan 16 21:20:01.845000 audit: BPF prog-id=23 op=UNLOAD
Jan 16 21:20:01.845000 audit: BPF prog-id=24 op=UNLOAD
Jan 16 21:20:01.862000 audit: BPF prog-id=33 op=LOAD
Jan 16 21:20:01.862000 audit: BPF prog-id=18 op=UNLOAD
Jan 16 21:20:01.864000 audit: BPF prog-id=34 op=LOAD
Jan 16 21:20:01.864000 audit: BPF prog-id=35 op=LOAD
Jan 16 21:20:01.864000 audit: BPF prog-id=19 op=UNLOAD
Jan 16 21:20:01.867000 audit: BPF prog-id=20 op=UNLOAD
Jan 16 21:20:01.874000 audit: BPF prog-id=36 op=LOAD
Jan 16 21:20:01.874000 audit: BPF prog-id=21 op=UNLOAD
Jan 16 21:20:01.884000 audit: BPF prog-id=37 op=LOAD
Jan 16 21:20:01.884000 audit: BPF prog-id=15 op=UNLOAD
Jan 16 21:20:01.884000 audit: BPF prog-id=38 op=LOAD
Jan 16 21:20:01.885000 audit: BPF prog-id=39 op=LOAD
Jan 16 21:20:01.885000 audit: BPF prog-id=16 op=UNLOAD
Jan 16 21:20:01.885000 audit: BPF prog-id=17 op=UNLOAD
Jan 16 21:20:01.886000 audit: BPF prog-id=40 op=LOAD
Jan 16 21:20:01.886000 audit: BPF prog-id=25 op=UNLOAD
Jan 16 21:20:01.886000 audit: BPF prog-id=41 op=LOAD
Jan 16 21:20:01.886000 audit: BPF prog-id=42 op=LOAD
Jan 16 21:20:01.886000 audit: BPF prog-id=26 op=UNLOAD
Jan 16 21:20:01.886000 audit: BPF prog-id=27 op=UNLOAD
Jan 16 21:20:01.900040 systemd[1]: Reload requested from client PID 1372 ('systemctl') (unit ensure-sysext.service)...
Jan 16 21:20:01.900279 systemd[1]: Reloading...
Jan 16 21:20:01.950564 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 16 21:20:01.950749 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 16 21:20:01.953216 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 21:20:01.963371 systemd-tmpfiles[1373]: ACLs are not supported, ignoring.
Jan 16 21:20:01.963528 systemd-tmpfiles[1373]: ACLs are not supported, ignoring.
Jan 16 21:20:01.990394 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 21:20:01.990466 systemd-tmpfiles[1373]: Skipping /boot
Jan 16 21:20:01.992334 systemd-udevd[1374]: Using default interface naming scheme 'v257'.
Jan 16 21:20:02.018755 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 21:20:02.018973 systemd-tmpfiles[1373]: Skipping /boot
Jan 16 21:20:02.105430 zram_generator::config[1406]: No configuration found.
Jan 16 21:20:02.300188 kernel: mousedev: PS/2 mouse device common for all mice
Jan 16 21:20:02.333876 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 16 21:20:02.347312 kernel: ACPI: button: Power Button [PWRF]
Jan 16 21:20:02.385252 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 16 21:20:02.395297 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 16 21:20:02.404518 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 16 21:20:02.566951 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 16 21:20:02.567941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 21:20:02.579005 systemd[1]: Reloading finished in 678 ms.
Jan 16 21:20:02.633757 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 21:20:02.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:02.658000 audit: BPF prog-id=43 op=LOAD
Jan 16 21:20:02.658000 audit: BPF prog-id=33 op=UNLOAD
Jan 16 21:20:02.658000 audit: BPF prog-id=44 op=LOAD
Jan 16 21:20:02.658000 audit: BPF prog-id=45 op=LOAD
Jan 16 21:20:02.658000 audit: BPF prog-id=34 op=UNLOAD
Jan 16 21:20:02.658000 audit: BPF prog-id=35 op=UNLOAD
Jan 16 21:20:02.659000 audit: BPF prog-id=46 op=LOAD
Jan 16 21:20:02.659000 audit: BPF prog-id=30 op=UNLOAD
Jan 16 21:20:02.659000 audit: BPF prog-id=47 op=LOAD
Jan 16 21:20:02.659000 audit: BPF prog-id=48 op=LOAD
Jan 16 21:20:02.659000 audit: BPF prog-id=31 op=UNLOAD
Jan 16 21:20:02.659000 audit: BPF prog-id=32 op=UNLOAD
Jan 16 21:20:02.661000 audit: BPF prog-id=49 op=LOAD
Jan 16 21:20:02.661000 audit: BPF prog-id=36 op=UNLOAD
Jan 16 21:20:02.664000 audit: BPF prog-id=50 op=LOAD
Jan 16 21:20:02.664000 audit: BPF prog-id=37 op=UNLOAD
Jan 16 21:20:02.666000 audit: BPF prog-id=51 op=LOAD
Jan 16 21:20:02.670000 audit: BPF prog-id=52 op=LOAD
Jan 16 21:20:02.767000 audit: BPF prog-id=38 op=UNLOAD
Jan 16 21:20:02.767000 audit: BPF prog-id=39 op=UNLOAD
Jan 16 21:20:02.782000 audit: BPF prog-id=53 op=LOAD
Jan 16 21:20:02.784000 audit: BPF prog-id=54 op=LOAD
Jan 16 21:20:02.784000 audit: BPF prog-id=28 op=UNLOAD
Jan 16 21:20:02.784000 audit: BPF prog-id=29 op=UNLOAD
Jan 16 21:20:02.787000 audit: BPF prog-id=55 op=LOAD
Jan 16 21:20:02.789000 audit: BPF prog-id=40 op=UNLOAD
Jan 16 21:20:02.790000 audit: BPF prog-id=56 op=LOAD
Jan 16 21:20:02.791000 audit: BPF prog-id=57 op=LOAD
Jan 16 21:20:02.792000 audit: BPF prog-id=41 op=UNLOAD
Jan 16 21:20:02.792000 audit: BPF prog-id=42 op=UNLOAD
Jan 16 21:20:02.810413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 21:20:02.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:02.925624 systemd[1]: Finished ensure-sysext.service.
Jan 16 21:20:02.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:02.965238 kernel: kvm_amd: TSC scaling supported
Jan 16 21:20:02.965340 kernel: kvm_amd: Nested Virtualization enabled
Jan 16 21:20:02.965444 kernel: kvm_amd: Nested Paging enabled
Jan 16 21:20:02.974265 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 16 21:20:02.974313 kernel: kvm_amd: PMU virtualization is disabled
Jan 16 21:20:02.978214 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:20:02.981768 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 16 21:20:03.106411 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 21:20:03.121412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 21:20:03.127865 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 21:20:03.140177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 21:20:03.155457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 21:20:03.184781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 21:20:03.196943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 21:20:03.197488 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 16 21:20:03.201752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 21:20:03.219738 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 16 21:20:03.231453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 16 21:20:03.237196 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 21:20:03.262000 audit: BPF prog-id=58 op=LOAD
Jan 16 21:20:03.297535 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 21:20:03.307000 audit: BPF prog-id=59 op=LOAD
Jan 16 21:20:03.310457 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 16 21:20:03.324326 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 21:20:03.337915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 21:20:03.342977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 21:20:03.352969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 21:20:03.356731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 21:20:03.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:03.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:03.376462 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 21:20:03.377165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 21:20:03.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:03.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:03.385181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 21:20:03.385494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 21:20:03.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:03.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:03.389338 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 21:20:03.390000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 16 21:20:03.390000 audit[1522]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3109a5f0 a2=420 a3=0 items=0 ppid=1488 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:03.390000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 16 21:20:03.391800 augenrules[1522]: No rules
Jan 16 21:20:03.390530 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 21:20:03.398859 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 16 21:20:03.399597 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 16 21:20:03.402483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 16 21:20:03.416249 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 16 21:20:03.443148 kernel: EDAC MC: Ver: 3.0.0
Jan 16 21:20:03.450333 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 16 21:20:03.458952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 21:20:03.459768 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 21:20:03.460014 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 21:20:03.476907 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 16 21:20:03.587570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 21:20:03.600443 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 16 21:20:03.604328 systemd-networkd[1513]: lo: Link UP
Jan 16 21:20:03.604337 systemd-networkd[1513]: lo: Gained carrier
Jan 16 21:20:03.607543 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:20:03.607550 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 21:20:03.609564 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 21:20:03.611503 systemd-networkd[1513]: eth0: Link UP
Jan 16 21:20:03.612844 systemd-networkd[1513]: eth0: Gained carrier
Jan 16 21:20:03.612858 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 16 21:20:03.618359 systemd[1]: Reached target network.target - Network.
Jan 16 21:20:03.623387 systemd[1]: Reached target time-set.target - System Time Set.
Jan 16 21:20:03.631856 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 16 21:20:03.643267 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 16 21:20:03.653300 systemd-networkd[1513]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 16 21:20:03.656026 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Jan 16 21:20:04.866657 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 16 21:20:04.867245 systemd-resolved[1281]: Clock change detected. Flushing caches.
Jan 16 21:20:04.867562 systemd-timesyncd[1516]: Initial clock synchronization to Fri 2026-01-16 21:20:04.866478 UTC.
Jan 16 21:20:04.888943 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 16 21:20:05.591462 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 21:20:05.601909 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 21:20:05.615434 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 16 21:20:05.658627 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 16 21:20:05.666952 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 21:20:05.674849 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 16 21:20:05.684542 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 16 21:20:05.693748 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 16 21:20:05.704956 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 16 21:20:05.715055 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 16 21:20:05.725877 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 16 21:20:05.735218 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 16 21:20:05.748335 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 16 21:20:05.759210 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 16 21:20:05.759324 systemd[1]: Reached target paths.target - Path Units.
Jan 16 21:20:05.765394 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 21:20:05.776646 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 16 21:20:05.787913 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 16 21:20:05.798875 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 16 21:20:05.807887 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 16 21:20:05.815544 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 16 21:20:05.852259 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 16 21:20:05.859919 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 16 21:20:05.869517 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 16 21:20:05.880507 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 21:20:05.886208 systemd[1]: Reached target basic.target - Basic System.
Jan 16 21:20:05.896745 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 16 21:20:05.896834 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 16 21:20:05.899950 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 16 21:20:05.929787 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 16 21:20:05.937760 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 21:20:05.947250 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 16 21:20:05.974573 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 16 21:20:05.980780 jq[1557]: false
Jan 16 21:20:05.983465 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 16 21:20:05.985927 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 16 21:20:05.994801 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 16 21:20:06.002362 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 16 21:20:06.014064 extend-filesystems[1558]: Found /dev/vda6
Jan 16 21:20:06.011797 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 16 21:20:06.020351 oslogin_cache_refresh[1559]: Refreshing passwd entry cache
Jan 16 21:20:06.033267 extend-filesystems[1558]: Found /dev/vda9
Jan 16 21:20:06.033267 extend-filesystems[1558]: Checking size of /dev/vda9
Jan 16 21:20:06.069356 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 16 21:20:06.069395 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing passwd entry cache
Jan 16 21:20:06.069395 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting users, quitting
Jan 16 21:20:06.069395 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 16 21:20:06.069395 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Refreshing group entry cache
Jan 16 21:20:06.028419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 16 21:20:06.070044 extend-filesystems[1558]: Resized partition /dev/vda9
Jan 16 21:20:06.061433 oslogin_cache_refresh[1559]: Failure getting users, quitting
Jan 16 21:20:06.044045 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 16 21:20:06.093857 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025)
Jan 16 21:20:06.102898 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Failure getting groups, quitting
Jan 16 21:20:06.102898 google_oslogin_nss_cache[1559]: oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 16 21:20:06.061459 oslogin_cache_refresh[1559]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 16 21:20:06.058289 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 16 21:20:06.061530 oslogin_cache_refresh[1559]: Refreshing group entry cache
Jan 16 21:20:06.059230 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 16 21:20:06.092879 oslogin_cache_refresh[1559]: Failure getting groups, quitting
Jan 16 21:20:06.063621 systemd[1]: Starting update-engine.service - Update Engine...
Jan 16 21:20:06.092898 oslogin_cache_refresh[1559]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 16 21:20:06.075837 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 16 21:20:06.106268 jq[1580]: true
Jan 16 21:20:06.096666 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 21:20:06.115167 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 16 21:20:06.121946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 16 21:20:06.122507 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 16 21:20:06.122874 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 16 21:20:06.137319 update_engine[1577]: I20260116 21:20:06.136543 1577 main.cc:92] Flatcar Update Engine starting
Jan 16 21:20:06.135417 systemd[1]: motdgen.service: Deactivated successfully.
Jan 16 21:20:06.137498 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 16 21:20:06.156270 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 16 21:20:06.159034 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 16 21:20:06.159778 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 21:20:06.197224 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 21:20:06.197224 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 16 21:20:06.197224 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 16 21:20:06.224950 extend-filesystems[1558]: Resized filesystem in /dev/vda9 Jan 16 21:20:06.230515 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 21:20:06.232469 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 21:20:06.259400 jq[1594]: true Jan 16 21:20:06.283837 tar[1592]: linux-amd64/LICENSE Jan 16 21:20:06.283837 tar[1592]: linux-amd64/helm Jan 16 21:20:06.297490 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (Power Button) Jan 16 21:20:06.297531 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 21:20:06.304662 systemd-logind[1575]: New seat seat0. Jan 16 21:20:06.345448 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 21:20:06.356500 dbus-daemon[1555]: [system] SELinux support is enabled Jan 16 21:20:06.356998 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 21:20:06.368767 update_engine[1577]: I20260116 21:20:06.364573 1577 update_check_scheduler.cc:74] Next update check in 2m48s Jan 16 21:20:06.370468 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 21:20:06.370508 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 16 21:20:06.376877 dbus-daemon[1555]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 16 21:20:06.379439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 21:20:06.379471 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 21:20:06.387501 systemd[1]: Started update-engine.service - Update Engine. Jan 16 21:20:06.398671 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 21:20:06.435606 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 21:20:06.440371 bash[1624]: Updated "/home/core/.ssh/authorized_keys" Jan 16 21:20:06.444583 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 21:20:06.461324 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 16 21:20:06.491810 systemd-networkd[1513]: eth0: Gained IPv6LL Jan 16 21:20:06.504020 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 21:20:06.511546 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 21:20:06.515993 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 21:20:06.539778 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 16 21:20:06.556451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:20:06.572408 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 21:20:06.585916 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 21:20:06.612927 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 16 21:20:06.686331 containerd[1596]: time="2026-01-16T21:20:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 16 21:20:06.691180 containerd[1596]: time="2026-01-16T21:20:06.690940667Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 16 21:20:06.691948 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 21:20:06.692611 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 21:20:06.712999 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 21:20:06.720489 containerd[1596]: time="2026-01-16T21:20:06.719650887Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.072µs" Jan 16 21:20:06.720489 containerd[1596]: time="2026-01-16T21:20:06.719798423Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 16 21:20:06.720489 containerd[1596]: time="2026-01-16T21:20:06.719855129Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 16 21:20:06.720489 containerd[1596]: time="2026-01-16T21:20:06.719876649Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 16 21:20:06.720489 containerd[1596]: time="2026-01-16T21:20:06.720394725Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 16 21:20:06.720489 containerd[1596]: time="2026-01-16T21:20:06.720424301Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 21:20:06.721338 containerd[1596]: time="2026-01-16T21:20:06.720925136Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 21:20:06.721338 containerd[1596]: time="2026-01-16T21:20:06.720946666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 21:20:06.721558 containerd[1596]: time="2026-01-16T21:20:06.721458242Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 21:20:06.721558 containerd[1596]: time="2026-01-16T21:20:06.721550113Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 21:20:06.721618 containerd[1596]: time="2026-01-16T21:20:06.721572605Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 21:20:06.721618 containerd[1596]: time="2026-01-16T21:20:06.721590809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 21:20:06.722004 containerd[1596]: time="2026-01-16T21:20:06.721887453Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 21:20:06.722004 containerd[1596]: time="2026-01-16T21:20:06.721970318Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 16 21:20:06.722648 containerd[1596]: time="2026-01-16T21:20:06.722204274Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 16 21:20:06.722648 containerd[1596]: time="2026-01-16T21:20:06.722503412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 16 21:20:06.722648 containerd[1596]: time="2026-01-16T21:20:06.722552605Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 21:20:06.722648 containerd[1596]: time="2026-01-16T21:20:06.722567553Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 16 21:20:06.722648 containerd[1596]: time="2026-01-16T21:20:06.722600283Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 16 21:20:06.723342 containerd[1596]: time="2026-01-16T21:20:06.723224088Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 16 21:20:06.723396 containerd[1596]: time="2026-01-16T21:20:06.723364240Z" level=info msg="metadata content store policy set" policy=shared Jan 16 21:20:06.725011 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 21:20:06.741527 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 16 21:20:06.742220 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 16 21:20:06.748468 containerd[1596]: time="2026-01-16T21:20:06.748303660Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 16 21:20:06.748571 containerd[1596]: time="2026-01-16T21:20:06.748517930Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 21:20:06.748752 containerd[1596]: time="2026-01-16T21:20:06.748616594Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 21:20:06.749891 containerd[1596]: time="2026-01-16T21:20:06.749348912Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 16 21:20:06.749891 containerd[1596]: time="2026-01-16T21:20:06.749526964Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 16 21:20:06.750031 containerd[1596]: time="2026-01-16T21:20:06.749850618Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.750024202Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.750196624Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.750361391Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.750529826Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.750892964Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.751067340Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.751247325Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 16 21:20:06.751874 containerd[1596]: time="2026-01-16T21:20:06.751417584Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 16 21:20:06.754860 containerd[1596]: time="2026-01-16T21:20:06.753342948Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 16 21:20:06.754860 containerd[1596]: time="2026-01-16T21:20:06.754389712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 16 21:20:06.754860 containerd[1596]: time="2026-01-16T21:20:06.754413537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 16 21:20:06.754860 containerd[1596]: time="2026-01-16T21:20:06.754580338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 16 21:20:06.754860 containerd[1596]: time="2026-01-16T21:20:06.754796351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 16 21:20:06.755266 containerd[1596]: time="2026-01-16T21:20:06.754969184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 16 21:20:06.755310 containerd[1596]: time="2026-01-16T21:20:06.755183434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 16 21:20:06.755541 containerd[1596]: time="2026-01-16T21:20:06.755344666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jan 16 21:20:06.759479 containerd[1596]: time="2026-01-16T21:20:06.756068306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 16 21:20:06.759479 containerd[1596]: time="2026-01-16T21:20:06.758562292Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 16 21:20:06.759479 containerd[1596]: time="2026-01-16T21:20:06.758886047Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 16 21:20:06.759479 containerd[1596]: time="2026-01-16T21:20:06.759416948Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 16 21:20:06.759479 containerd[1596]: time="2026-01-16T21:20:06.759513849Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 16 21:20:06.759479 containerd[1596]: time="2026-01-16T21:20:06.759532183Z" level=info msg="Start snapshots syncer" Jan 16 21:20:06.758638 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 16 21:20:06.759970 containerd[1596]: time="2026-01-16T21:20:06.759847702Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 16 21:20:06.761070 containerd[1596]: time="2026-01-16T21:20:06.760300377Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.
containerd.grpc.v1.cri\"}" Jan 16 21:20:06.761070 containerd[1596]: time="2026-01-16T21:20:06.760374435Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 16 21:20:06.761586 containerd[1596]: time="2026-01-16T21:20:06.761390032Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.762950154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763207044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763224457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763361903Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763492657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763514077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763648788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763669317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.763873207Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart 
type=io.containerd.monitor.container.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.764038967Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.764058984Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.764072259Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.764307238Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 21:20:06.764952 containerd[1596]: time="2026-01-16T21:20:06.764321354Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 16 21:20:06.765471 containerd[1596]: time="2026-01-16T21:20:06.764336653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 16 21:20:06.765471 containerd[1596]: time="2026-01-16T21:20:06.764352011Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 16 21:20:06.765471 containerd[1596]: time="2026-01-16T21:20:06.764368432Z" level=info msg="runtime interface created" Jan 16 21:20:06.765471 containerd[1596]: time="2026-01-16T21:20:06.764377519Z" level=info msg="created NRI interface" Jan 16 21:20:06.765471 containerd[1596]: time="2026-01-16T21:20:06.764391575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 16 21:20:06.765471 containerd[1596]: time="2026-01-16T21:20:06.764509065Z" level=info msg="Connect containerd service" Jan 16 21:20:06.765471 containerd[1596]: 
time="2026-01-16T21:20:06.764536306Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 21:20:06.767522 containerd[1596]: time="2026-01-16T21:20:06.767413424Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 21:20:06.776531 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 21:20:06.794039 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 21:20:06.810644 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 21:20:06.819943 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 21:20:06.971855 tar[1592]: linux-amd64/README.md Jan 16 21:20:06.979590 containerd[1596]: time="2026-01-16T21:20:06.979339241Z" level=info msg="Start subscribing containerd event" Jan 16 21:20:06.979590 containerd[1596]: time="2026-01-16T21:20:06.979449197Z" level=info msg="Start recovering state" Jan 16 21:20:06.979590 containerd[1596]: time="2026-01-16T21:20:06.979587495Z" level=info msg="Start event monitor" Jan 16 21:20:06.979859 containerd[1596]: time="2026-01-16T21:20:06.979608544Z" level=info msg="Start cni network conf syncer for default" Jan 16 21:20:06.979859 containerd[1596]: time="2026-01-16T21:20:06.979622620Z" level=info msg="Start streaming server" Jan 16 21:20:06.979859 containerd[1596]: time="2026-01-16T21:20:06.979632669Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 16 21:20:06.979859 containerd[1596]: time="2026-01-16T21:20:06.979641386Z" level=info msg="runtime interface starting up..." Jan 16 21:20:06.979859 containerd[1596]: time="2026-01-16T21:20:06.979649110Z" level=info msg="starting plugins..." 
Jan 16 21:20:06.979859 containerd[1596]: time="2026-01-16T21:20:06.979668166Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 16 21:20:06.980657 containerd[1596]: time="2026-01-16T21:20:06.980628618Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 21:20:06.983216 containerd[1596]: time="2026-01-16T21:20:06.982999014Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 21:20:06.983360 containerd[1596]: time="2026-01-16T21:20:06.983273938Z" level=info msg="containerd successfully booted in 0.299267s" Jan 16 21:20:06.983837 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 21:20:07.021043 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 21:20:08.313808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:20:08.322331 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 21:20:08.331041 systemd[1]: Startup finished in 10.027s (kernel) + 24.197s (initrd) + 9.828s (userspace) = 44.052s. Jan 16 21:20:08.337774 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:20:09.224014 kubelet[1694]: E0116 21:20:09.223793 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:20:09.230231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:20:09.230781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:20:09.231625 systemd[1]: kubelet.service: Consumed 1.325s CPU time, 264.9M memory peak. 
Jan 16 21:20:15.261915 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 21:20:15.267892 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:46330.service - OpenSSH per-connection server daemon (10.0.0.1:46330). Jan 16 21:20:15.479467 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 46330 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:20:15.487496 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:15.504472 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 21:20:15.507286 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 21:20:15.518943 systemd-logind[1575]: New session 1 of user core. Jan 16 21:20:15.552040 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 21:20:15.557071 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 21:20:15.591015 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:15.599596 systemd-logind[1575]: New session 2 of user core. Jan 16 21:20:15.809986 systemd[1714]: Queued start job for default target default.target. Jan 16 21:20:15.822891 systemd[1714]: Created slice app.slice - User Application Slice. Jan 16 21:20:15.823001 systemd[1714]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 16 21:20:15.823022 systemd[1714]: Reached target paths.target - Paths. Jan 16 21:20:15.823229 systemd[1714]: Reached target timers.target - Timers. Jan 16 21:20:15.825917 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 21:20:15.827630 systemd[1714]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 16 21:20:15.849332 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 16 21:20:15.849941 systemd[1714]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 16 21:20:15.850333 systemd[1714]: Reached target sockets.target - Sockets. Jan 16 21:20:15.850390 systemd[1714]: Reached target basic.target - Basic System. Jan 16 21:20:15.850444 systemd[1714]: Reached target default.target - Main User Target. Jan 16 21:20:15.850483 systemd[1714]: Startup finished in 239ms. Jan 16 21:20:15.851408 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 21:20:15.862666 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 21:20:15.893407 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:46332.service - OpenSSH per-connection server daemon (10.0.0.1:46332). Jan 16 21:20:16.016485 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 46332 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:20:16.021037 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:16.037234 systemd-logind[1575]: New session 3 of user core. Jan 16 21:20:16.053858 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 21:20:16.096049 sshd[1732]: Connection closed by 10.0.0.1 port 46332 Jan 16 21:20:16.095663 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:16.108528 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:46332.service: Deactivated successfully. Jan 16 21:20:16.110883 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 21:20:16.113197 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit. Jan 16 21:20:16.117846 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:46338.service - OpenSSH per-connection server daemon (10.0.0.1:46338). Jan 16 21:20:16.119608 systemd-logind[1575]: Removed session 3. 
Jan 16 21:20:16.246671 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 46338 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:20:16.249921 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:16.271551 systemd-logind[1575]: New session 4 of user core. Jan 16 21:20:16.290487 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 21:20:16.321460 sshd[1743]: Connection closed by 10.0.0.1 port 46338 Jan 16 21:20:16.323946 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:16.343394 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:46338.service: Deactivated successfully. Jan 16 21:20:16.348959 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 21:20:16.351191 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Jan 16 21:20:16.356666 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:46352.service - OpenSSH per-connection server daemon (10.0.0.1:46352). Jan 16 21:20:16.358701 systemd-logind[1575]: Removed session 4. Jan 16 21:20:16.455368 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 46352 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:20:16.458422 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:16.474620 systemd-logind[1575]: New session 5 of user core. Jan 16 21:20:16.489723 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 21:20:16.530512 sshd[1753]: Connection closed by 10.0.0.1 port 46352 Jan 16 21:20:16.531065 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:16.554196 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:46352.service: Deactivated successfully. Jan 16 21:20:16.559487 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 21:20:16.562480 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. 
Jan 16 21:20:16.569055 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:46354.service - OpenSSH per-connection server daemon (10.0.0.1:46354). Jan 16 21:20:16.573710 systemd-logind[1575]: Removed session 5. Jan 16 21:20:16.694871 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 46354 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:20:16.703681 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:16.716503 systemd-logind[1575]: New session 6 of user core. Jan 16 21:20:16.734666 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 21:20:16.807400 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 21:20:16.807999 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:20:16.826893 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 16 21:20:16.831893 sshd[1764]: Connection closed by 10.0.0.1 port 46354 Jan 16 21:20:16.831840 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:16.851028 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:46354.service: Deactivated successfully. Jan 16 21:20:16.855398 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 21:20:16.857691 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. Jan 16 21:20:16.867626 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:46364.service - OpenSSH per-connection server daemon (10.0.0.1:46364). Jan 16 21:20:16.873201 systemd-logind[1575]: Removed session 6. Jan 16 21:20:16.992887 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 46364 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:20:16.995896 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:17.013337 systemd-logind[1575]: New session 7 of user core. 
Jan 16 21:20:17.027823 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 21:20:17.080449 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 21:20:17.081324 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:20:17.093640 sudo[1778]: pam_unix(sudo:session): session closed for user root Jan 16 21:20:17.112860 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 21:20:17.113863 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:20:17.133716 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 21:20:17.249000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:20:17.255896 kernel: kauditd_printk_skb: 116 callbacks suppressed Jan 16 21:20:17.255980 kernel: audit: type=1305 audit(1768598417.249:220): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 21:20:17.260676 augenrules[1802]: No rules Jan 16 21:20:17.249000 audit[1802]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6b618f30 a2=420 a3=0 items=0 ppid=1783 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:17.273714 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 21:20:17.274390 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 16 21:20:17.276610 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 16 21:20:17.295027 sshd[1776]: Connection closed by 10.0.0.1 port 46364 Jan 16 21:20:17.296338 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 16 21:20:17.249000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:20:17.308852 kernel: audit: type=1300 audit(1768598417.249:220): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6b618f30 a2=420 a3=0 items=0 ppid=1783 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:17.308899 kernel: audit: type=1327 audit(1768598417.249:220): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 21:20:17.308940 kernel: audit: type=1106 audit(1768598417.273:221): pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.273000 audit[1777]: USER_END pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.331018 kernel: audit: type=1130 audit(1768598417.273:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:17.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.368868 kernel: audit: type=1131 audit(1768598417.273:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.368953 kernel: audit: type=1104 audit(1768598417.273:224): pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.273000 audit[1777]: CRED_DISP pid=1777 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.391415 kernel: audit: type=1106 audit(1768598417.296:225): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.296000 audit[1772]: USER_END pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.296000 audit[1772]: CRED_DISP pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.443495 kernel: audit: type=1104 audit(1768598417.296:226): pid=1772 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.457440 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:46364.service: Deactivated successfully. Jan 16 21:20:17.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.59:22-10.0.0.1:46364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.464058 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 21:20:17.474030 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Jan 16 21:20:17.478939 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:46380.service - OpenSSH per-connection server daemon (10.0.0.1:46380). Jan 16 21:20:17.484017 systemd-logind[1575]: Removed session 7. Jan 16 21:20:17.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:46380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.485315 kernel: audit: type=1131 audit(1768598417.457:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.59:22-10.0.0.1:46364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:17.594000 audit[1811]: USER_ACCT pid=1811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.598240 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 46380 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:20:17.597000 audit[1811]: CRED_ACQ pid=1811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.597000 audit[1811]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd77fe0b70 a2=3 a3=0 items=0 ppid=1 pid=1811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:17.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:20:17.601974 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:20:17.623320 systemd-logind[1575]: New session 8 of user core. Jan 16 21:20:17.636594 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 16 21:20:17.647000 audit[1811]: USER_START pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.657000 audit[1815]: CRED_ACQ pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:20:17.687000 audit[1816]: USER_ACCT pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.689240 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 21:20:17.688000 audit[1816]: CRED_REFR pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.688000 audit[1816]: USER_START pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:20:17.690028 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 21:20:18.605068 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 16 21:20:18.636716 (dockerd)[1837]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 21:20:19.248151 dockerd[1837]: time="2026-01-16T21:20:19.247900562Z" level=info msg="Starting up" Jan 16 21:20:19.250052 dockerd[1837]: time="2026-01-16T21:20:19.249975838Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 16 21:20:19.251326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 21:20:19.256986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:20:19.301890 dockerd[1837]: time="2026-01-16T21:20:19.301530681Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 16 21:20:19.476300 systemd[1]: var-lib-docker-metacopy\x2dcheck1956793862-merged.mount: Deactivated successfully. Jan 16 21:20:19.606951 dockerd[1837]: time="2026-01-16T21:20:19.606298388Z" level=info msg="Loading containers: start." Jan 16 21:20:19.659349 kernel: Initializing XFRM netlink socket Jan 16 21:20:19.675404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:20:19.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:20:19.697029 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:20:19.873254 kubelet[1871]: E0116 21:20:19.872867 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:20:19.880282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:20:19.880599 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:20:19.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:20:19.881543 systemd[1]: kubelet.service: Consumed 356ms CPU time, 111M memory peak. 
Jan 16 21:20:19.950000 audit[1907]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:19.950000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc9b367cb0 a2=0 a3=0 items=0 ppid=1837 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:19.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:20:19.959000 audit[1909]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:19.959000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd9ea98120 a2=0 a3=0 items=0 ppid=1837 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:19.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:20:19.967000 audit[1911]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:19.967000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebfaca4f0 a2=0 a3=0 items=0 ppid=1837 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:19.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:20:19.975000 audit[1913]: NETFILTER_CFG 
table=filter:5 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:19.975000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe97bae1a0 a2=0 a3=0 items=0 ppid=1837 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:19.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:20:19.982000 audit[1915]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:19.982000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffb1b841e0 a2=0 a3=0 items=0 ppid=1837 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:19.982000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:20:19.993000 audit[1917]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:19.993000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffeb910400 a2=0 a3=0 items=0 ppid=1837 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:19.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:20:20.011000 audit[1919]: NETFILTER_CFG 
table=filter:8 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.011000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff78fa0670 a2=0 a3=0 items=0 ppid=1837 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.011000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:20:20.024000 audit[1921]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.024000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff99fa58b0 a2=0 a3=0 items=0 ppid=1837 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.024000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:20:20.151000 audit[1924]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.151000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd05627fb0 a2=0 a3=0 items=0 ppid=1837 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.151000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 16 21:20:20.161000 audit[1926]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.161000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd70770b20 a2=0 a3=0 items=0 ppid=1837 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:20:20.169000 audit[1928]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.169000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe66983a10 a2=0 a3=0 items=0 ppid=1837 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:20:20.180000 audit[1930]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.180000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffea142300 a2=0 a3=0 items=0 ppid=1837 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:20:20.196000 audit[1932]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.196000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdbd56ec80 a2=0 a3=0 items=0 ppid=1837 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:20:20.368000 audit[1962]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.368000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe93461c20 a2=0 a3=0 items=0 ppid=1837 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.368000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 21:20:20.378000 audit[1964]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.378000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcce747360 a2=0 a3=0 items=0 ppid=1837 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 21:20:20.388000 audit[1966]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.388000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe141b0400 a2=0 a3=0 items=0 ppid=1837 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 21:20:20.398000 audit[1968]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.398000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0e2d3d70 a2=0 a3=0 items=0 ppid=1837 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 21:20:20.408000 audit[1970]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.408000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffaf3641d0 a2=0 a3=0 items=0 ppid=1837 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 21:20:20.422000 audit[1972]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.422000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe61a3ef40 a2=0 a3=0 items=0 ppid=1837 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:20:20.439000 audit[1974]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.439000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdaa3b3f30 a2=0 a3=0 items=0 ppid=1837 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.439000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:20:20.451000 audit[1976]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.451000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe580e21e0 a2=0 a3=0 items=0 ppid=1837 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.451000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 21:20:20.467000 audit[1978]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.467000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc5ebdedf0 a2=0 a3=0 items=0 ppid=1837 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 16 21:20:20.479000 audit[1980]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.479000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdf1915c40 a2=0 a3=0 items=0 ppid=1837 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.479000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 21:20:20.488000 audit[1982]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.488000 audit[1982]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=236 a0=3 a1=7fffefed0820 a2=0 a3=0 items=0 ppid=1837 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.488000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 21:20:20.497000 audit[1984]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.497000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc450646f0 a2=0 a3=0 items=0 ppid=1837 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.497000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 21:20:20.513000 audit[1986]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.513000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcb5204110 a2=0 a3=0 items=0 ppid=1837 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.513000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 21:20:20.543000 audit[1991]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 16 21:20:20.543000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe872819b0 a2=0 a3=0 items=0 ppid=1837 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:20:20.557000 audit[1993]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.557000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd1824dbe0 a2=0 a3=0 items=0 ppid=1837 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.557000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:20:20.570000 audit[1995]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.570000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc4e1371c0 a2=0 a3=0 items=0 ppid=1837 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.570000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:20:20.582000 audit[1997]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.582000 
audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdfbc477a0 a2=0 a3=0 items=0 ppid=1837 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.582000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 21:20:20.593000 audit[1999]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.593000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd3f188b40 a2=0 a3=0 items=0 ppid=1837 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 21:20:20.609000 audit[2001]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:20:20.609000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff70215150 a2=0 a3=0 items=0 ppid=1837 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 21:20:20.679000 audit[2006]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.679000 audit[2006]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd96210f10 a2=0 a3=0 items=0 ppid=1837 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 16 21:20:20.696000 audit[2008]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.696000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc14b6a4a0 a2=0 a3=0 items=0 ppid=1837 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 16 21:20:20.742000 audit[2016]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.742000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd11180610 a2=0 a3=0 items=0 ppid=1837 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 16 21:20:20.787000 audit[2022]: NETFILTER_CFG table=filter:37 family=2 
entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.787000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeb4ee27f0 a2=0 a3=0 items=0 ppid=1837 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 16 21:20:20.799000 audit[2024]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.799000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe3f04db20 a2=0 a3=0 items=0 ppid=1837 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 16 21:20:20.814000 audit[2026]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.814000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe37d0a600 a2=0 a3=0 items=0 ppid=1837 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.814000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 16 21:20:20.826000 audit[2028]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.826000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffce458da80 a2=0 a3=0 items=0 ppid=1837 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 21:20:20.838000 audit[2030]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:20:20.838000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef0ec5460 a2=0 a3=0 items=0 ppid=1837 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:20.838000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 16 21:20:20.841280 systemd-networkd[1513]: docker0: Link UP Jan 16 21:20:20.862953 dockerd[1837]: time="2026-01-16T21:20:20.862444025Z" level=info msg="Loading containers: done." Jan 16 21:20:20.899055 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3512573561-merged.mount: Deactivated successfully. 
Jan 16 21:20:20.918611 dockerd[1837]: time="2026-01-16T21:20:20.918344440Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 21:20:20.922525 dockerd[1837]: time="2026-01-16T21:20:20.922359977Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 16 21:20:20.922525 dockerd[1837]: time="2026-01-16T21:20:20.922497904Z" level=info msg="Initializing buildkit" Jan 16 21:20:21.065570 dockerd[1837]: time="2026-01-16T21:20:21.065308499Z" level=info msg="Completed buildkit initialization" Jan 16 21:20:21.075537 dockerd[1837]: time="2026-01-16T21:20:21.073719942Z" level=info msg="Daemon has completed initialization" Jan 16 21:20:21.075537 dockerd[1837]: time="2026-01-16T21:20:21.074299784Z" level=info msg="API listen on /run/docker.sock" Jan 16 21:20:21.074922 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 21:20:21.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:22.553494 containerd[1596]: time="2026-01-16T21:20:22.553428480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 21:20:23.514484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170367872.mount: Deactivated successfully. 
Jan 16 21:20:26.881185 containerd[1596]: time="2026-01-16T21:20:26.880917612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:26.884637 containerd[1596]: time="2026-01-16T21:20:26.884506953Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28338157" Jan 16 21:20:26.887507 containerd[1596]: time="2026-01-16T21:20:26.887419372Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:26.891643 containerd[1596]: time="2026-01-16T21:20:26.891553546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:26.893314 containerd[1596]: time="2026-01-16T21:20:26.893226329Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 4.339758987s" Jan 16 21:20:26.893314 containerd[1596]: time="2026-01-16T21:20:26.893291821Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 16 21:20:26.896601 containerd[1596]: time="2026-01-16T21:20:26.896383273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 16 21:20:28.658581 containerd[1596]: time="2026-01-16T21:20:28.658465275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:28.662323 containerd[1596]: time="2026-01-16T21:20:28.662038000Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24987951" Jan 16 21:20:28.666621 containerd[1596]: time="2026-01-16T21:20:28.666470034Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:28.673385 containerd[1596]: time="2026-01-16T21:20:28.673306634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:28.675929 containerd[1596]: time="2026-01-16T21:20:28.675724066Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.779302742s" Jan 16 21:20:28.675929 containerd[1596]: time="2026-01-16T21:20:28.675800208Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 16 21:20:28.676515 containerd[1596]: time="2026-01-16T21:20:28.676456815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 16 21:20:30.029999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 21:20:30.035678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:20:30.317155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 21:20:30.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:30.322283 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 16 21:20:30.322412 kernel: audit: type=1130 audit(1768598430.315:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:30.344249 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:20:30.440134 kubelet[2150]: E0116 21:20:30.439961 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:20:30.443743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:20:30.444169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:20:30.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:20:30.444964 systemd[1]: kubelet.service: Consumed 281ms CPU time, 110.9M memory peak. Jan 16 21:20:30.459292 kernel: audit: type=1131 audit(1768598430.443:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 21:20:30.674979 containerd[1596]: time="2026-01-16T21:20:30.674758134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:30.679192 containerd[1596]: time="2026-01-16T21:20:30.679003290Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 16 21:20:30.684550 containerd[1596]: time="2026-01-16T21:20:30.684465633Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:30.696761 containerd[1596]: time="2026-01-16T21:20:30.692939428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:30.696761 containerd[1596]: time="2026-01-16T21:20:30.694385157Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.017896782s" Jan 16 21:20:30.696761 containerd[1596]: time="2026-01-16T21:20:30.694415043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 16 21:20:30.696761 containerd[1596]: time="2026-01-16T21:20:30.695888383Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 21:20:32.851888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217774458.mount: Deactivated successfully. 
Jan 16 21:20:35.930467 containerd[1596]: time="2026-01-16T21:20:35.929428542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:35.939734 containerd[1596]: time="2026-01-16T21:20:35.937904677Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 16 21:20:35.943531 containerd[1596]: time="2026-01-16T21:20:35.940827490Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:35.951591 containerd[1596]: time="2026-01-16T21:20:35.951439162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:35.962409 containerd[1596]: time="2026-01-16T21:20:35.953361591Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 5.257441448s" Jan 16 21:20:35.962409 containerd[1596]: time="2026-01-16T21:20:35.953443233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 16 21:20:35.966670 containerd[1596]: time="2026-01-16T21:20:35.965819277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 16 21:20:37.342416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793845011.mount: Deactivated successfully. 
Jan 16 21:20:40.533952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 21:20:40.546327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:20:40.899741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:20:40.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:40.917353 kernel: audit: type=1130 audit(1768598440.899:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:40.927944 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:20:40.938885 containerd[1596]: time="2026-01-16T21:20:40.938668085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:40.940626 containerd[1596]: time="2026-01-16T21:20:40.940535692Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18431446" Jan 16 21:20:40.942150 containerd[1596]: time="2026-01-16T21:20:40.942052555Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:40.947404 containerd[1596]: time="2026-01-16T21:20:40.947186826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:40.948304 containerd[1596]: time="2026-01-16T21:20:40.947971301Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.981317827s" Jan 16 21:20:40.948304 containerd[1596]: time="2026-01-16T21:20:40.948034016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 16 21:20:40.949530 containerd[1596]: time="2026-01-16T21:20:40.949294884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 21:20:41.049571 kubelet[2229]: E0116 21:20:41.049359 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:20:41.056988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:20:41.058381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:20:41.066932 systemd[1]: kubelet.service: Consumed 375ms CPU time, 110.6M memory peak. Jan 16 21:20:41.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:20:41.087157 kernel: audit: type=1131 audit(1768598441.065:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 21:20:41.540670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116031347.mount: Deactivated successfully. Jan 16 21:20:41.579252 containerd[1596]: time="2026-01-16T21:20:41.578219876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:20:41.582828 containerd[1596]: time="2026-01-16T21:20:41.582695424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 21:20:41.585873 containerd[1596]: time="2026-01-16T21:20:41.584705128Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:20:41.589533 containerd[1596]: time="2026-01-16T21:20:41.589345681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 21:20:41.590557 containerd[1596]: time="2026-01-16T21:20:41.590373379Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 641.044605ms" Jan 16 21:20:41.590557 containerd[1596]: time="2026-01-16T21:20:41.590445833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 16 21:20:41.592553 containerd[1596]: time="2026-01-16T21:20:41.592523418Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.16-0\"" Jan 16 21:20:42.493594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992986832.mount: Deactivated successfully. Jan 16 21:20:47.696938 containerd[1596]: time="2026-01-16T21:20:47.696512512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:47.700858 containerd[1596]: time="2026-01-16T21:20:47.700293453Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55729194" Jan 16 21:20:47.702921 containerd[1596]: time="2026-01-16T21:20:47.702713750Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:47.709169 containerd[1596]: time="2026-01-16T21:20:47.708607987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:20:47.710158 containerd[1596]: time="2026-01-16T21:20:47.710043073Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.117482386s" Jan 16 21:20:47.710242 containerd[1596]: time="2026-01-16T21:20:47.710194934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 16 21:20:51.281721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 16 21:20:51.287527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 21:20:51.310907 update_engine[1577]: I20260116 21:20:51.310283 1577 update_attempter.cc:509] Updating boot flags... Jan 16 21:20:51.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:51.917256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:20:51.931209 kernel: audit: type=1130 audit(1768598451.916:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:51.973629 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 21:20:52.780840 kubelet[2343]: E0116 21:20:52.780056 2343 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 21:20:52.801443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 21:20:52.801835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 21:20:52.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:20:52.806325 systemd[1]: kubelet.service: Consumed 943ms CPU time, 110M memory peak. 
Jan 16 21:20:52.822671 kernel: audit: type=1131 audit(1768598452.805:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 21:20:52.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:52.856901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:20:52.857747 systemd[1]: kubelet.service: Consumed 943ms CPU time, 110M memory peak. Jan 16 21:20:52.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:52.882775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:20:52.901520 kernel: audit: type=1130 audit(1768598452.857:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:52.901719 kernel: audit: type=1131 audit(1768598452.857:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:20:52.947384 systemd[1]: Reload requested from client PID 2360 ('systemctl') (unit session-8.scope)... Jan 16 21:20:52.947485 systemd[1]: Reloading... Jan 16 21:20:53.113205 zram_generator::config[2403]: No configuration found. Jan 16 21:20:53.551860 systemd[1]: Reloading finished in 603 ms. 
Jan 16 21:20:53.596000 audit: BPF prog-id=63 op=LOAD
Jan 16 21:20:53.596000 audit: BPF prog-id=50 op=UNLOAD
Jan 16 21:20:53.608296 kernel: audit: type=1334 audit(1768598453.596:288): prog-id=63 op=LOAD
Jan 16 21:20:53.608375 kernel: audit: type=1334 audit(1768598453.596:289): prog-id=50 op=UNLOAD
Jan 16 21:20:53.608421 kernel: audit: type=1334 audit(1768598453.596:290): prog-id=64 op=LOAD
Jan 16 21:20:53.596000 audit: BPF prog-id=64 op=LOAD
Jan 16 21:20:53.612889 kernel: audit: type=1334 audit(1768598453.596:291): prog-id=65 op=LOAD
Jan 16 21:20:53.596000 audit: BPF prog-id=65 op=LOAD
Jan 16 21:20:53.617672 kernel: audit: type=1334 audit(1768598453.596:292): prog-id=51 op=UNLOAD
Jan 16 21:20:53.596000 audit: BPF prog-id=51 op=UNLOAD
Jan 16 21:20:53.596000 audit: BPF prog-id=52 op=UNLOAD
Jan 16 21:20:53.625758 kernel: audit: type=1334 audit(1768598453.596:293): prog-id=52 op=UNLOAD
Jan 16 21:20:53.603000 audit: BPF prog-id=66 op=LOAD
Jan 16 21:20:53.634000 audit: BPF prog-id=49 op=UNLOAD
Jan 16 21:20:53.638000 audit: BPF prog-id=67 op=LOAD
Jan 16 21:20:53.638000 audit: BPF prog-id=43 op=UNLOAD
Jan 16 21:20:53.638000 audit: BPF prog-id=68 op=LOAD
Jan 16 21:20:53.639000 audit: BPF prog-id=69 op=LOAD
Jan 16 21:20:53.639000 audit: BPF prog-id=44 op=UNLOAD
Jan 16 21:20:53.639000 audit: BPF prog-id=45 op=UNLOAD
Jan 16 21:20:53.645000 audit: BPF prog-id=70 op=LOAD
Jan 16 21:20:53.645000 audit: BPF prog-id=60 op=UNLOAD
Jan 16 21:20:53.646000 audit: BPF prog-id=71 op=LOAD
Jan 16 21:20:53.646000 audit: BPF prog-id=72 op=LOAD
Jan 16 21:20:53.646000 audit: BPF prog-id=61 op=UNLOAD
Jan 16 21:20:53.646000 audit: BPF prog-id=62 op=UNLOAD
Jan 16 21:20:53.649000 audit: BPF prog-id=73 op=LOAD
Jan 16 21:20:53.649000 audit: BPF prog-id=55 op=UNLOAD
Jan 16 21:20:53.649000 audit: BPF prog-id=74 op=LOAD
Jan 16 21:20:53.649000 audit: BPF prog-id=75 op=LOAD
Jan 16 21:20:53.649000 audit: BPF prog-id=56 op=UNLOAD
Jan 16 21:20:53.649000 audit: BPF prog-id=57 op=UNLOAD
Jan 16 21:20:53.651000 audit: BPF prog-id=76 op=LOAD
Jan 16 21:20:53.652000 audit: BPF prog-id=59 op=UNLOAD
Jan 16 21:20:53.656000 audit: BPF prog-id=77 op=LOAD
Jan 16 21:20:53.657000 audit: BPF prog-id=46 op=UNLOAD
Jan 16 21:20:53.657000 audit: BPF prog-id=78 op=LOAD
Jan 16 21:20:53.657000 audit: BPF prog-id=79 op=LOAD
Jan 16 21:20:53.657000 audit: BPF prog-id=47 op=UNLOAD
Jan 16 21:20:53.657000 audit: BPF prog-id=48 op=UNLOAD
Jan 16 21:20:53.659000 audit: BPF prog-id=80 op=LOAD
Jan 16 21:20:53.659000 audit: BPF prog-id=58 op=UNLOAD
Jan 16 21:20:53.660000 audit: BPF prog-id=81 op=LOAD
Jan 16 21:20:53.661000 audit: BPF prog-id=82 op=LOAD
Jan 16 21:20:53.663000 audit: BPF prog-id=53 op=UNLOAD
Jan 16 21:20:53.663000 audit: BPF prog-id=54 op=UNLOAD
Jan 16 21:20:53.722816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 21:20:53.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:53.731661 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 16 21:20:53.737064 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 21:20:53.739293 systemd[1]: kubelet.service: Deactivated successfully.
Jan 16 21:20:53.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:53.739977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 21:20:53.740209 systemd[1]: kubelet.service: Consumed 225ms CPU time, 98.5M memory peak.
Jan 16 21:20:53.747928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 21:20:54.129986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 16 21:20:54.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 21:20:54.153285 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 16 21:20:54.263186 kubelet[2456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 21:20:54.263186 kubelet[2456]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 16 21:20:54.263186 kubelet[2456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 16 21:20:54.263660 kubelet[2456]: I0116 21:20:54.263214    2456 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 16 21:20:54.659060 kubelet[2456]: I0116 21:20:54.658910    2456 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 16 21:20:54.659060 kubelet[2456]: I0116 21:20:54.658990    2456 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 16 21:20:54.660441 kubelet[2456]: I0116 21:20:54.660189    2456 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 16 21:20:54.729515 kubelet[2456]: E0116 21:20:54.729397    2456 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Jan 16 21:20:54.731969 kubelet[2456]: I0116 21:20:54.731626    2456 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 16 21:20:54.860203 kubelet[2456]: I0116 21:20:54.859929    2456 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 16 21:20:54.882482 kubelet[2456]: I0116 21:20:54.881671    2456 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 16 21:20:54.884435 kubelet[2456]: I0116 21:20:54.882246    2456 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 16 21:20:54.886683 kubelet[2456]: I0116 21:20:54.884360    2456 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 16 21:20:54.886683 kubelet[2456]: I0116 21:20:54.884691    2456 topology_manager.go:138] "Creating topology manager with none policy"
Jan 16 21:20:54.886683 kubelet[2456]: I0116 21:20:54.884707    2456 container_manager_linux.go:304] "Creating device plugin manager"
Jan 16 21:20:54.886683 kubelet[2456]: I0116 21:20:54.884877    2456 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 21:20:54.912043 kubelet[2456]: I0116 21:20:54.909913    2456 kubelet.go:446] "Attempting to sync node with API server"
Jan 16 21:20:54.913543 kubelet[2456]: I0116 21:20:54.912812    2456 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 16 21:20:54.915017 kubelet[2456]: I0116 21:20:54.914537    2456 kubelet.go:352] "Adding apiserver pod source"
Jan 16 21:20:54.915626 kubelet[2456]: I0116 21:20:54.915030    2456 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 16 21:20:54.919466 kubelet[2456]: W0116 21:20:54.919408    2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 16 21:20:54.919813 kubelet[2456]: W0116 21:20:54.919625    2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 16 21:20:54.921516 kubelet[2456]: E0116 21:20:54.921376    2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Jan 16 21:20:54.923351 kubelet[2456]: E0116 21:20:54.923182    2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Jan 16 21:20:54.927315 kubelet[2456]: I0116 21:20:54.927070    2456 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 16 21:20:54.927771 kubelet[2456]: I0116 21:20:54.927730    2456 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 16 21:20:54.928513 kubelet[2456]: W0116 21:20:54.927881    2456 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 16 21:20:54.935909 kubelet[2456]: I0116 21:20:54.935056    2456 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 16 21:20:54.935909 kubelet[2456]: I0116 21:20:54.935210    2456 server.go:1287] "Started kubelet"
Jan 16 21:20:54.935909 kubelet[2456]: I0116 21:20:54.935390    2456 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 16 21:20:54.942301 kubelet[2456]: I0116 21:20:54.938783    2456 server.go:479] "Adding debug handlers to kubelet server"
Jan 16 21:20:54.948274 kubelet[2456]: I0116 21:20:54.948041    2456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 16 21:20:54.954752 kubelet[2456]: I0116 21:20:54.950744    2456 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 16 21:20:54.954752 kubelet[2456]: E0116 21:20:54.950880    2456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b52eca0b75a13  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-16 21:20:54.935181843 +0000 UTC m=+0.773205739,LastTimestamp:2026-01-16 21:20:54.935181843 +0000 UTC m=+0.773205739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 16 21:20:54.988740 kubelet[2456]: I0116 21:20:54.987372    2456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 16 21:20:55.004250 kubelet[2456]: I0116 21:20:55.002387    2456 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 16 21:20:55.012254 kubelet[2456]: E0116 21:20:55.010992    2456 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:20:55.012254 kubelet[2456]: I0116 21:20:55.011358    2456 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 16 21:20:55.014434 kubelet[2456]: E0116 21:20:55.014327    2456 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 16 21:20:55.016834 kubelet[2456]: W0116 21:20:55.014520    2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 16 21:20:55.016834 kubelet[2456]: E0116 21:20:55.014746    2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Jan 16 21:20:55.016834 kubelet[2456]: I0116 21:20:55.014773    2456 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 16 21:20:55.016834 kubelet[2456]: E0116 21:20:55.015058    2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms"
Jan 16 21:20:55.016834 kubelet[2456]: I0116 21:20:55.015302    2456 reconciler.go:26] "Reconciler: start to sync state"
Jan 16 21:20:55.019297 kubelet[2456]: I0116 21:20:55.018339    2456 factory.go:221] Registration of the systemd container factory successfully
Jan 16 21:20:55.019297 kubelet[2456]: I0116 21:20:55.018476    2456 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 16 21:20:55.021018 kubelet[2456]: I0116 21:20:55.020735    2456 factory.go:221] Registration of the containerd container factory successfully
Jan 16 21:20:55.048000 audit[2472]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.048000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc799fcca0 a2=0 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 16 21:20:55.052000 audit[2473]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.052000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe10f2d060 a2=0 a3=0 items=0 ppid=2456 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 16 21:20:55.061000 audit[2475]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.061000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe94738020 a2=0 a3=0 items=0 ppid=2456 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 16 21:20:55.069000 audit[2479]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.069000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff0b992060 a2=0 a3=0 items=0 ppid=2456 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 16 21:20:55.072946 kubelet[2456]: I0116 21:20:55.072712    2456 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 16 21:20:55.072946 kubelet[2456]: I0116 21:20:55.072730    2456 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 16 21:20:55.072946 kubelet[2456]: I0116 21:20:55.072750    2456 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 21:20:55.112234 kubelet[2456]: E0116 21:20:55.111939    2456 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:20:55.159880 kubelet[2456]: I0116 21:20:55.159722    2456 policy_none.go:49] "None policy: Start"
Jan 16 21:20:55.159880 kubelet[2456]: I0116 21:20:55.159810    2456 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 16 21:20:55.159880 kubelet[2456]: I0116 21:20:55.159832    2456 state_mem.go:35] "Initializing new in-memory state store"
Jan 16 21:20:55.179053 kubelet[2456]: I0116 21:20:55.178890    2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 16 21:20:55.178000 audit[2482]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.178000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd2f1642f0 a2=0 a3=0 items=0 ppid=2456 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.178000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jan 16 21:20:55.180000 audit[2484]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:20:55.180000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde1326ec0 a2=0 a3=0 items=0 ppid=2456 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 16 21:20:55.181963 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 16 21:20:55.186393 kubelet[2456]: I0116 21:20:55.181247    2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 16 21:20:55.186393 kubelet[2456]: I0116 21:20:55.181269    2456 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 16 21:20:55.186393 kubelet[2456]: I0116 21:20:55.181286    2456 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 16 21:20:55.186393 kubelet[2456]: I0116 21:20:55.181292    2456 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 16 21:20:55.186393 kubelet[2456]: E0116 21:20:55.181338    2456 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 16 21:20:55.183000 audit[2485]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.183000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe74d1e8f0 a2=0 a3=0 items=0 ppid=2456 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 16 21:20:55.189000 audit[2487]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.189000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6608ae70 a2=0 a3=0 items=0 ppid=2456 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 16 21:20:55.193480 kubelet[2456]: W0116 21:20:55.188929    2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Jan 16 21:20:55.193480 kubelet[2456]: E0116 21:20:55.188965    2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Jan 16 21:20:55.191000 audit[2489]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:20:55.191000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb7c1be80 a2=0 a3=0 items=0 ppid=2456 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 16 21:20:55.197000 audit[2486]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:20:55.197000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfeb810b0 a2=0 a3=0 items=0 ppid=2456 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 16 21:20:55.201000 audit[2490]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:20:55.201000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffebcd1100 a2=0 a3=0 items=0 ppid=2456 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 16 21:20:55.205000 audit[2491]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 16 21:20:55.205000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe96041ed0 a2=0 a3=0 items=0 ppid=2456 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 16 21:20:55.207059 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 16 21:20:55.213013 kubelet[2456]: E0116 21:20:55.212931    2456 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 16 21:20:55.217626 kubelet[2456]: E0116 21:20:55.217486    2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms"
Jan 16 21:20:55.218024 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 16 21:20:55.237771 kubelet[2456]: I0116 21:20:55.237329    2456 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 16 21:20:55.237771 kubelet[2456]: I0116 21:20:55.237676    2456 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 16 21:20:55.237771 kubelet[2456]: I0116 21:20:55.237694    2456 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 16 21:20:55.239032 kubelet[2456]: I0116 21:20:55.238977    2456 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 16 21:20:55.248911 kubelet[2456]: E0116 21:20:55.248737    2456 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 16 21:20:55.248911 kubelet[2456]: E0116 21:20:55.248789    2456 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 16 21:20:55.317518 kubelet[2456]: I0116 21:20:55.317410    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:20:55.318357 kubelet[2456]: I0116 21:20:55.317607    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:20:55.318357 kubelet[2456]: I0116 21:20:55.317641    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:20:55.318357 kubelet[2456]: I0116 21:20:55.317665    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44233b9e9eda92bbdac8cb431fa182b5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"44233b9e9eda92bbdac8cb431fa182b5\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:20:55.318357 kubelet[2456]: I0116 21:20:55.317687    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44233b9e9eda92bbdac8cb431fa182b5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"44233b9e9eda92bbdac8cb431fa182b5\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:20:55.318357 kubelet[2456]: I0116 21:20:55.317710    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44233b9e9eda92bbdac8cb431fa182b5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"44233b9e9eda92bbdac8cb431fa182b5\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:20:55.318467 kubelet[2456]: I0116 21:20:55.317733    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:20:55.318467 kubelet[2456]: I0116 21:20:55.317755    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:20:55.318467 kubelet[2456]: I0116 21:20:55.317780    2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost"
Jan 16 21:20:55.328795 systemd[1]: Created slice kubepods-burstable-pod44233b9e9eda92bbdac8cb431fa182b5.slice - libcontainer container kubepods-burstable-pod44233b9e9eda92bbdac8cb431fa182b5.slice.
Jan 16 21:20:55.339773 kubelet[2456]: I0116 21:20:55.339651    2456 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 16 21:20:55.340862 kubelet[2456]: E0116 21:20:55.340361    2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Jan 16 21:20:55.353496 kubelet[2456]: E0116 21:20:55.353367    2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 16 21:20:55.362267 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice.
Jan 16 21:20:55.384860 kubelet[2456]: E0116 21:20:55.383792    2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 16 21:20:55.391518 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice.
Jan 16 21:20:55.398047 kubelet[2456]: E0116 21:20:55.397626    2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 16 21:20:55.545961 kubelet[2456]: I0116 21:20:55.545913    2456 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 16 21:20:55.547320 kubelet[2456]: E0116 21:20:55.547228    2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Jan 16 21:20:55.619532 kubelet[2456]: E0116 21:20:55.619422    2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms"
Jan 16 21:20:55.655761 kubelet[2456]: E0116 21:20:55.655230    2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:20:55.656072 containerd[1596]: time="2026-01-16T21:20:55.655989560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:44233b9e9eda92bbdac8cb431fa182b5,Namespace:kube-system,Attempt:0,}"
Jan 16 21:20:55.685237 kubelet[2456]: E0116 21:20:55.685045    2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:20:55.693530 containerd[1596]: time="2026-01-16T21:20:55.692891348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}"
Jan 16 21:20:55.699267 kubelet[2456]: E0116 21:20:55.698350    2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:20:55.699392 containerd[1596]: time="2026-01-16T21:20:55.699300218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}"
Jan 16 21:20:55.731865 containerd[1596]: time="2026-01-16T21:20:55.731759037Z" level=info msg="connecting to shim 5aa44746d8a78a841e2d1e2c34d5c7444a7d1888bf0391ef7bd157b9f01f91b5" address="unix:///run/containerd/s/289c9d1d89e7dfacca275ffdf46c9883aa4e7b61f5cefe214d2f6b89d775ff2f" namespace=k8s.io protocol=ttrpc version=3
Jan 16 21:20:55.797692 containerd[1596]: time="2026-01-16T21:20:55.797342428Z" level=info msg="connecting to shim 02bb50423eea45e9c6b05404e5516945ce7afbe46d7fe4082a4023affc29fd46" address="unix:///run/containerd/s/8eadb7ca00268012ab137dcdc2136cb44241cb78ca7cbf401e379ca5005d8cc1" namespace=k8s.io protocol=ttrpc version=3
Jan 16 21:20:55.805472 containerd[1596]: time="2026-01-16T21:20:55.805351175Z" level=info msg="connecting to shim 60c31d3d131e7e825985369c6eb5f4b818271962d1577a6c0bdd6c45314016ed" address="unix:///run/containerd/s/4187756f5fe6f9f845628ebf07d0f425346dc1dac9fc2e798267cc37ba0a4910" namespace=k8s.io protocol=ttrpc version=3
Jan 16 21:20:55.834751 systemd[1]: Started cri-containerd-5aa44746d8a78a841e2d1e2c34d5c7444a7d1888bf0391ef7bd157b9f01f91b5.scope - libcontainer container 5aa44746d8a78a841e2d1e2c34d5c7444a7d1888bf0391ef7bd157b9f01f91b5.
Jan 16 21:20:55.855878 systemd[1]: Started cri-containerd-02bb50423eea45e9c6b05404e5516945ce7afbe46d7fe4082a4023affc29fd46.scope - libcontainer container 02bb50423eea45e9c6b05404e5516945ce7afbe46d7fe4082a4023affc29fd46.
Jan 16 21:20:55.867000 audit: BPF prog-id=83 op=LOAD
Jan 16 21:20:55.873000 audit: BPF prog-id=84 op=LOAD
Jan 16 21:20:55.873000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2500 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561613434373436643861373861383431653264316532633334643563
Jan 16 21:20:55.873000 audit: BPF prog-id=84 op=UNLOAD
Jan 16 21:20:55.873000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2500 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561613434373436643861373861383431653264316532633334643563
Jan 16 21:20:55.874000 audit: BPF prog-id=85 op=LOAD
Jan 16 21:20:55.874000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2500 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561613434373436643861373861383431653264316532633334643563
Jan 16 21:20:55.875000 audit: BPF prog-id=86 op=LOAD
Jan 16 21:20:55.875000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2500 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561613434373436643861373861383431653264316532633334643563
Jan 16 21:20:55.875000 audit: BPF prog-id=86 op=UNLOAD
Jan 16 21:20:55.875000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2500 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:20:55.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561613434373436643861373861383431653264316532633334643563
Jan 16 21:20:55.875000 audit: BPF prog-id=85 op=UNLOAD
Jan 16 21:20:55.875000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2500 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16
21:20:55.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561613434373436643861373861383431653264316532633334643563 Jan 16 21:20:55.875000 audit: BPF prog-id=87 op=LOAD Jan 16 21:20:55.875000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2500 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561613434373436643861373861383431653264316532633334643563 Jan 16 21:20:55.879723 systemd[1]: Started cri-containerd-60c31d3d131e7e825985369c6eb5f4b818271962d1577a6c0bdd6c45314016ed.scope - libcontainer container 60c31d3d131e7e825985369c6eb5f4b818271962d1577a6c0bdd6c45314016ed. 
Jan 16 21:20:55.887175 kubelet[2456]: W0116 21:20:55.886508 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 16 21:20:55.887175 kubelet[2456]: E0116 21:20:55.886652 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:20:55.891000 audit: BPF prog-id=88 op=LOAD Jan 16 21:20:55.903000 audit: BPF prog-id=89 op=LOAD Jan 16 21:20:55.903000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626235303432336565613435653963366230353430346535353136 Jan 16 21:20:55.904000 audit: BPF prog-id=89 op=UNLOAD Jan 16 21:20:55.904000 audit[2553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.904000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626235303432336565613435653963366230353430346535353136 Jan 16 21:20:55.904000 audit: BPF prog-id=90 op=LOAD Jan 16 21:20:55.904000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626235303432336565613435653963366230353430346535353136 Jan 16 21:20:55.904000 audit: BPF prog-id=91 op=LOAD Jan 16 21:20:55.904000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626235303432336565613435653963366230353430346535353136 Jan 16 21:20:55.904000 audit: BPF prog-id=91 op=UNLOAD Jan 16 21:20:55.904000 audit[2553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 16 21:20:55.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626235303432336565613435653963366230353430346535353136 Jan 16 21:20:55.905000 audit: BPF prog-id=90 op=UNLOAD Jan 16 21:20:55.905000 audit[2553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626235303432336565613435653963366230353430346535353136 Jan 16 21:20:55.905000 audit: BPF prog-id=92 op=LOAD Jan 16 21:20:55.905000 audit[2553]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032626235303432336565613435653963366230353430346535353136 Jan 16 21:20:55.937000 audit: BPF prog-id=93 op=LOAD Jan 16 21:20:55.938000 audit: BPF prog-id=94 op=LOAD Jan 16 21:20:55.938000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2540 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633331643364313331653765383235393835333639633665623566 Jan 16 21:20:55.938000 audit: BPF prog-id=94 op=UNLOAD Jan 16 21:20:55.938000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633331643364313331653765383235393835333639633665623566 Jan 16 21:20:55.939000 audit: BPF prog-id=95 op=LOAD Jan 16 21:20:55.939000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2540 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633331643364313331653765383235393835333639633665623566 Jan 16 21:20:55.939000 audit: BPF prog-id=96 op=LOAD Jan 16 21:20:55.939000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2540 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633331643364313331653765383235393835333639633665623566 Jan 16 21:20:55.939000 audit: BPF prog-id=96 op=UNLOAD Jan 16 21:20:55.939000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633331643364313331653765383235393835333639633665623566 Jan 16 21:20:55.939000 audit: BPF prog-id=95 op=UNLOAD Jan 16 21:20:55.939000 audit[2574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633331643364313331653765383235393835333639633665623566 Jan 16 21:20:55.939000 audit: BPF prog-id=97 op=LOAD Jan 16 21:20:55.939000 audit[2574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2540 pid=2574 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:55.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633331643364313331653765383235393835333639633665623566 Jan 16 21:20:55.950341 kubelet[2456]: I0116 21:20:55.949986 2456 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:20:55.953877 kubelet[2456]: E0116 21:20:55.953245 2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jan 16 21:20:55.986793 containerd[1596]: time="2026-01-16T21:20:55.986748049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:44233b9e9eda92bbdac8cb431fa182b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aa44746d8a78a841e2d1e2c34d5c7444a7d1888bf0391ef7bd157b9f01f91b5\"" Jan 16 21:20:56.003654 kubelet[2456]: E0116 21:20:56.003055 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:56.011030 containerd[1596]: time="2026-01-16T21:20:56.010364018Z" level=info msg="CreateContainer within sandbox \"5aa44746d8a78a841e2d1e2c34d5c7444a7d1888bf0391ef7bd157b9f01f91b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 21:20:56.034321 containerd[1596]: time="2026-01-16T21:20:56.034038406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"02bb50423eea45e9c6b05404e5516945ce7afbe46d7fe4082a4023affc29fd46\"" Jan 16 21:20:56.039243 kubelet[2456]: E0116 21:20:56.038328 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:56.042180 containerd[1596]: time="2026-01-16T21:20:56.041797674Z" level=info msg="CreateContainer within sandbox \"02bb50423eea45e9c6b05404e5516945ce7afbe46d7fe4082a4023affc29fd46\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 21:20:56.092674 containerd[1596]: time="2026-01-16T21:20:56.091882292Z" level=info msg="Container 97dbdb260bdc5d0b04dc42f46eefca21614f54cdb07f6f1ba3bc615109668843: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:20:56.105790 kubelet[2456]: W0116 21:20:56.103633 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 16 21:20:56.105790 kubelet[2456]: E0116 21:20:56.103714 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:20:56.162667 containerd[1596]: time="2026-01-16T21:20:56.162616501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"60c31d3d131e7e825985369c6eb5f4b818271962d1577a6c0bdd6c45314016ed\"" Jan 16 21:20:56.164458 kubelet[2456]: E0116 21:20:56.164427 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:56.172738 containerd[1596]: time="2026-01-16T21:20:56.171801701Z" level=info msg="Container 1e445a0c7105a6e7bace28f9b24e7c0a98a2fdfb7920276fcde57f95425ce1a7: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:20:56.176066 containerd[1596]: time="2026-01-16T21:20:56.175191112Z" level=info msg="CreateContainer within sandbox \"60c31d3d131e7e825985369c6eb5f4b818271962d1577a6c0bdd6c45314016ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 21:20:56.178231 containerd[1596]: time="2026-01-16T21:20:56.178033975Z" level=info msg="CreateContainer within sandbox \"5aa44746d8a78a841e2d1e2c34d5c7444a7d1888bf0391ef7bd157b9f01f91b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"97dbdb260bdc5d0b04dc42f46eefca21614f54cdb07f6f1ba3bc615109668843\"" Jan 16 21:20:56.187841 containerd[1596]: time="2026-01-16T21:20:56.178967047Z" level=info msg="StartContainer for \"97dbdb260bdc5d0b04dc42f46eefca21614f54cdb07f6f1ba3bc615109668843\"" Jan 16 21:20:56.220846 containerd[1596]: time="2026-01-16T21:20:56.216940548Z" level=info msg="connecting to shim 97dbdb260bdc5d0b04dc42f46eefca21614f54cdb07f6f1ba3bc615109668843" address="unix:///run/containerd/s/289c9d1d89e7dfacca275ffdf46c9883aa4e7b61f5cefe214d2f6b89d775ff2f" protocol=ttrpc version=3 Jan 16 21:20:56.229035 kubelet[2456]: W0116 21:20:56.228979 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 16 21:20:56.229479 kubelet[2456]: E0116 21:20:56.229453 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:20:56.309444 containerd[1596]: time="2026-01-16T21:20:56.293974553Z" level=info msg="CreateContainer within sandbox \"02bb50423eea45e9c6b05404e5516945ce7afbe46d7fe4082a4023affc29fd46\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e445a0c7105a6e7bace28f9b24e7c0a98a2fdfb7920276fcde57f95425ce1a7\"" Jan 16 21:20:56.309444 containerd[1596]: time="2026-01-16T21:20:56.307652976Z" level=info msg="StartContainer for \"1e445a0c7105a6e7bace28f9b24e7c0a98a2fdfb7920276fcde57f95425ce1a7\"" Jan 16 21:20:56.339247 containerd[1596]: time="2026-01-16T21:20:56.333951470Z" level=info msg="Container b70348a33910ea543b71660149b2bc944f5dd7d4befbebffec86d121e77d1708: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:20:56.339247 containerd[1596]: time="2026-01-16T21:20:56.334261749Z" level=info msg="connecting to shim 1e445a0c7105a6e7bace28f9b24e7c0a98a2fdfb7920276fcde57f95425ce1a7" address="unix:///run/containerd/s/8eadb7ca00268012ab137dcdc2136cb44241cb78ca7cbf401e379ca5005d8cc1" protocol=ttrpc version=3 Jan 16 21:20:56.379643 containerd[1596]: time="2026-01-16T21:20:56.379046417Z" level=info msg="CreateContainer within sandbox \"60c31d3d131e7e825985369c6eb5f4b818271962d1577a6c0bdd6c45314016ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b70348a33910ea543b71660149b2bc944f5dd7d4befbebffec86d121e77d1708\"" Jan 16 21:20:56.387918 containerd[1596]: time="2026-01-16T21:20:56.386946655Z" level=info msg="StartContainer for \"b70348a33910ea543b71660149b2bc944f5dd7d4befbebffec86d121e77d1708\"" Jan 16 21:20:56.391879 containerd[1596]: time="2026-01-16T21:20:56.391509261Z" level=info msg="connecting to shim b70348a33910ea543b71660149b2bc944f5dd7d4befbebffec86d121e77d1708" address="unix:///run/containerd/s/4187756f5fe6f9f845628ebf07d0f425346dc1dac9fc2e798267cc37ba0a4910" protocol=ttrpc version=3 Jan 16 21:20:56.430323 kubelet[2456]: E0116 
21:20:56.430162 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s" Jan 16 21:20:56.432451 systemd[1]: Started cri-containerd-97dbdb260bdc5d0b04dc42f46eefca21614f54cdb07f6f1ba3bc615109668843.scope - libcontainer container 97dbdb260bdc5d0b04dc42f46eefca21614f54cdb07f6f1ba3bc615109668843. Jan 16 21:20:56.448502 kubelet[2456]: W0116 21:20:56.448294 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jan 16 21:20:56.449472 kubelet[2456]: E0116 21:20:56.448521 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jan 16 21:20:56.463523 systemd[1]: Started cri-containerd-b70348a33910ea543b71660149b2bc944f5dd7d4befbebffec86d121e77d1708.scope - libcontainer container b70348a33910ea543b71660149b2bc944f5dd7d4befbebffec86d121e77d1708. 
Jan 16 21:20:56.517000 audit: BPF prog-id=98 op=LOAD Jan 16 21:20:56.518000 audit: BPF prog-id=99 op=LOAD Jan 16 21:20:56.518000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2500 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646264623236306264633564306230346463343266343665656663 Jan 16 21:20:56.520000 audit: BPF prog-id=99 op=UNLOAD Jan 16 21:20:56.520000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2500 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646264623236306264633564306230346463343266343665656663 Jan 16 21:20:56.521000 audit: BPF prog-id=100 op=LOAD Jan 16 21:20:56.521000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2500 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.521000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646264623236306264633564306230346463343266343665656663 Jan 16 21:20:56.522000 audit: BPF prog-id=101 op=LOAD Jan 16 21:20:56.522000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2500 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646264623236306264633564306230346463343266343665656663 Jan 16 21:20:56.522000 audit: BPF prog-id=101 op=UNLOAD Jan 16 21:20:56.522000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2500 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646264623236306264633564306230346463343266343665656663 Jan 16 21:20:56.522000 audit: BPF prog-id=100 op=UNLOAD Jan 16 21:20:56.522000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2500 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:20:56.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646264623236306264633564306230346463343266343665656663 Jan 16 21:20:56.523000 audit: BPF prog-id=102 op=LOAD Jan 16 21:20:56.523000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2500 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937646264623236306264633564306230346463343266343665656663 Jan 16 21:20:56.535768 systemd[1]: Started cri-containerd-1e445a0c7105a6e7bace28f9b24e7c0a98a2fdfb7920276fcde57f95425ce1a7.scope - libcontainer container 1e445a0c7105a6e7bace28f9b24e7c0a98a2fdfb7920276fcde57f95425ce1a7. 
Jan 16 21:20:56.541000 audit: BPF prog-id=103 op=LOAD Jan 16 21:20:56.544000 audit: BPF prog-id=104 op=LOAD Jan 16 21:20:56.544000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2540 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303334386133333931306561353433623731363630313439623262 Jan 16 21:20:56.544000 audit: BPF prog-id=104 op=UNLOAD Jan 16 21:20:56.544000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303334386133333931306561353433623731363630313439623262 Jan 16 21:20:56.546000 audit: BPF prog-id=105 op=LOAD Jan 16 21:20:56.546000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2540 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.546000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303334386133333931306561353433623731363630313439623262 Jan 16 21:20:56.548000 audit: BPF prog-id=106 op=LOAD Jan 16 21:20:56.548000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2540 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303334386133333931306561353433623731363630313439623262 Jan 16 21:20:56.549000 audit: BPF prog-id=106 op=UNLOAD Jan 16 21:20:56.549000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303334386133333931306561353433623731363630313439623262 Jan 16 21:20:56.549000 audit: BPF prog-id=105 op=UNLOAD Jan 16 21:20:56.549000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:20:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303334386133333931306561353433623731363630313439623262 Jan 16 21:20:56.549000 audit: BPF prog-id=107 op=LOAD Jan 16 21:20:56.549000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2540 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303334386133333931306561353433623731363630313439623262 Jan 16 21:20:56.583000 audit: BPF prog-id=108 op=LOAD Jan 16 21:20:56.585000 audit: BPF prog-id=109 op=LOAD Jan 16 21:20:56.585000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2527 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435613063373130356136653762616365323866396232346537 Jan 16 21:20:56.585000 audit: BPF prog-id=109 op=UNLOAD Jan 16 21:20:56.585000 audit[2634]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435613063373130356136653762616365323866396232346537 Jan 16 21:20:56.585000 audit: BPF prog-id=110 op=LOAD Jan 16 21:20:56.585000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2527 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435613063373130356136653762616365323866396232346537 Jan 16 21:20:56.585000 audit: BPF prog-id=111 op=LOAD Jan 16 21:20:56.585000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2527 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435613063373130356136653762616365323866396232346537 Jan 16 21:20:56.585000 audit: BPF prog-id=111 op=UNLOAD Jan 16 21:20:56.585000 audit[2634]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435613063373130356136653762616365323866396232346537 Jan 16 21:20:56.585000 audit: BPF prog-id=110 op=UNLOAD Jan 16 21:20:56.585000 audit[2634]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2527 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435613063373130356136653762616365323866396232346537 Jan 16 21:20:56.585000 audit: BPF prog-id=112 op=LOAD Jan 16 21:20:56.585000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2527 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:20:56.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343435613063373130356136653762616365323866396232346537 Jan 16 21:20:56.659017 containerd[1596]: time="2026-01-16T21:20:56.658125606Z" level=info msg="StartContainer for \"b70348a33910ea543b71660149b2bc944f5dd7d4befbebffec86d121e77d1708\" returns 
successfully" Jan 16 21:20:56.678709 containerd[1596]: time="2026-01-16T21:20:56.678263462Z" level=info msg="StartContainer for \"97dbdb260bdc5d0b04dc42f46eefca21614f54cdb07f6f1ba3bc615109668843\" returns successfully" Jan 16 21:20:56.723642 containerd[1596]: time="2026-01-16T21:20:56.723396537Z" level=info msg="StartContainer for \"1e445a0c7105a6e7bace28f9b24e7c0a98a2fdfb7920276fcde57f95425ce1a7\" returns successfully" Jan 16 21:20:56.757194 kubelet[2456]: I0116 21:20:56.756508 2456 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:20:56.757928 kubelet[2456]: E0116 21:20:56.757887 2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jan 16 21:20:57.317169 kubelet[2456]: E0116 21:20:57.317001 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:57.321234 kubelet[2456]: E0116 21:20:57.318413 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:57.322300 kubelet[2456]: E0116 21:20:57.322279 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:57.322501 kubelet[2456]: E0116 21:20:57.322484 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:57.330216 kubelet[2456]: E0116 21:20:57.330040 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:57.331158 kubelet[2456]: E0116 21:20:57.330471 
2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:58.339813 kubelet[2456]: E0116 21:20:58.339694 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:58.340610 kubelet[2456]: E0116 21:20:58.340186 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:58.340803 kubelet[2456]: E0116 21:20:58.340653 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:58.340803 kubelet[2456]: E0116 21:20:58.340795 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:58.341975 kubelet[2456]: E0116 21:20:58.341451 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:58.343840 kubelet[2456]: E0116 21:20:58.343314 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:58.368796 kubelet[2456]: I0116 21:20:58.368694 2456 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 16 21:20:59.344352 kubelet[2456]: E0116 21:20:59.344057 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:59.347868 kubelet[2456]: E0116 21:20:59.345251 2456 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:20:59.348388 kubelet[2456]: E0116 21:20:59.348200 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:20:59.349229 kubelet[2456]: E0116 21:20:59.349162 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:00.200849 kubelet[2456]: E0116 21:21:00.200297 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 16 21:21:00.200849 kubelet[2456]: E0116 21:21:00.200705 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:01.510834 kubelet[2456]: E0116 21:21:01.508270 2456 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 16 21:21:01.578683 kubelet[2456]: I0116 21:21:01.578300 2456 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 16 21:21:01.578683 kubelet[2456]: E0116 21:21:01.578398 2456 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 16 21:21:01.616663 kubelet[2456]: I0116 21:21:01.616493 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 16 21:21:01.635685 kubelet[2456]: E0116 21:21:01.634242 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Jan 16 21:21:01.635685 kubelet[2456]: I0116 21:21:01.634272 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 16 21:21:01.637485 kubelet[2456]: E0116 21:21:01.637297 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 16 21:21:01.637485 kubelet[2456]: I0116 21:21:01.637370 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 16 21:21:01.642332 kubelet[2456]: E0116 21:21:01.641816 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 16 21:21:01.963930 kubelet[2456]: I0116 21:21:01.963664 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 16 21:21:02.012287 kubelet[2456]: E0116 21:21:02.010964 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 16 21:21:02.013026 kubelet[2456]: E0116 21:21:02.012846 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:02.162464 kubelet[2456]: I0116 21:21:02.157921 2456 apiserver.go:52] "Watching apiserver" Jan 16 21:21:02.220159 kubelet[2456]: I0116 21:21:02.218275 2456 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 21:21:03.648019 kubelet[2456]: I0116 21:21:03.647846 2456 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Jan 16 21:21:03.690481 kubelet[2456]: E0116 21:21:03.690380 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:04.660316 kubelet[2456]: E0116 21:21:04.659871 2456 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:05.239692 kubelet[2456]: I0116 21:21:05.239322 2456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.238320317 podStartE2EDuration="2.238320317s" podCreationTimestamp="2026-01-16 21:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:21:05.235667892 +0000 UTC m=+11.073691787" watchObservedRunningTime="2026-01-16 21:21:05.238320317 +0000 UTC m=+11.076344192" Jan 16 21:21:07.834057 systemd[1]: Reload requested from client PID 2738 ('systemctl') (unit session-8.scope)... Jan 16 21:21:07.834447 systemd[1]: Reloading... Jan 16 21:21:08.063215 zram_generator::config[2787]: No configuration found. Jan 16 21:21:08.645584 systemd[1]: Reloading finished in 808 ms. Jan 16 21:21:08.717931 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:21:08.737742 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 21:21:08.738481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:21:08.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:08.739584 systemd[1]: kubelet.service: Consumed 2.589s CPU time, 132M memory peak. 
Jan 16 21:21:08.744913 kernel: kauditd_printk_skb: 205 callbacks suppressed Jan 16 21:21:08.744998 kernel: audit: type=1131 audit(1768598468.738:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:08.747273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 21:21:08.746000 audit: BPF prog-id=113 op=LOAD Jan 16 21:21:08.746000 audit: BPF prog-id=114 op=LOAD Jan 16 21:21:08.777416 kernel: audit: type=1334 audit(1768598468.746:392): prog-id=113 op=LOAD Jan 16 21:21:08.778506 kernel: audit: type=1334 audit(1768598468.746:393): prog-id=114 op=LOAD Jan 16 21:21:08.778602 kernel: audit: type=1334 audit(1768598468.746:394): prog-id=81 op=UNLOAD Jan 16 21:21:08.746000 audit: BPF prog-id=81 op=UNLOAD Jan 16 21:21:08.781043 kernel: audit: type=1334 audit(1768598468.746:395): prog-id=82 op=UNLOAD Jan 16 21:21:08.746000 audit: BPF prog-id=82 op=UNLOAD Jan 16 21:21:08.788173 kernel: audit: type=1334 audit(1768598468.748:396): prog-id=115 op=LOAD Jan 16 21:21:08.748000 audit: BPF prog-id=115 op=LOAD Jan 16 21:21:08.748000 audit: BPF prog-id=67 op=UNLOAD Jan 16 21:21:08.796748 kernel: audit: type=1334 audit(1768598468.748:397): prog-id=67 op=UNLOAD Jan 16 21:21:08.796861 kernel: audit: type=1334 audit(1768598468.748:398): prog-id=116 op=LOAD Jan 16 21:21:08.748000 audit: BPF prog-id=116 op=LOAD Jan 16 21:21:08.800427 kernel: audit: type=1334 audit(1768598468.748:399): prog-id=117 op=LOAD Jan 16 21:21:08.748000 audit: BPF prog-id=117 op=LOAD Jan 16 21:21:08.804129 kernel: audit: type=1334 audit(1768598468.748:400): prog-id=68 op=UNLOAD Jan 16 21:21:08.748000 audit: BPF prog-id=68 op=UNLOAD Jan 16 21:21:08.748000 audit: BPF prog-id=69 op=UNLOAD Jan 16 21:21:08.752000 audit: BPF prog-id=118 op=LOAD Jan 16 21:21:08.752000 audit: BPF prog-id=77 op=UNLOAD Jan 16 21:21:08.752000 audit: BPF prog-id=119 
op=LOAD Jan 16 21:21:08.753000 audit: BPF prog-id=120 op=LOAD Jan 16 21:21:08.753000 audit: BPF prog-id=78 op=UNLOAD Jan 16 21:21:08.753000 audit: BPF prog-id=79 op=UNLOAD Jan 16 21:21:08.758000 audit: BPF prog-id=121 op=LOAD Jan 16 21:21:08.758000 audit: BPF prog-id=76 op=UNLOAD Jan 16 21:21:08.762000 audit: BPF prog-id=122 op=LOAD Jan 16 21:21:08.762000 audit: BPF prog-id=73 op=UNLOAD Jan 16 21:21:08.762000 audit: BPF prog-id=123 op=LOAD Jan 16 21:21:08.762000 audit: BPF prog-id=124 op=LOAD Jan 16 21:21:08.762000 audit: BPF prog-id=74 op=UNLOAD Jan 16 21:21:08.762000 audit: BPF prog-id=75 op=UNLOAD Jan 16 21:21:08.773000 audit: BPF prog-id=125 op=LOAD Jan 16 21:21:08.773000 audit: BPF prog-id=66 op=UNLOAD Jan 16 21:21:08.778000 audit: BPF prog-id=126 op=LOAD Jan 16 21:21:08.778000 audit: BPF prog-id=80 op=UNLOAD Jan 16 21:21:08.783000 audit: BPF prog-id=127 op=LOAD Jan 16 21:21:08.783000 audit: BPF prog-id=70 op=UNLOAD Jan 16 21:21:08.783000 audit: BPF prog-id=128 op=LOAD Jan 16 21:21:08.783000 audit: BPF prog-id=129 op=LOAD Jan 16 21:21:08.783000 audit: BPF prog-id=71 op=UNLOAD Jan 16 21:21:08.783000 audit: BPF prog-id=72 op=UNLOAD Jan 16 21:21:08.788000 audit: BPF prog-id=130 op=LOAD Jan 16 21:21:08.788000 audit: BPF prog-id=63 op=UNLOAD Jan 16 21:21:08.788000 audit: BPF prog-id=131 op=LOAD Jan 16 21:21:08.788000 audit: BPF prog-id=132 op=LOAD Jan 16 21:21:08.788000 audit: BPF prog-id=64 op=UNLOAD Jan 16 21:21:08.788000 audit: BPF prog-id=65 op=UNLOAD Jan 16 21:21:09.095588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 21:21:09.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:09.112607 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 21:21:09.249119 kubelet[2829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:21:09.249119 kubelet[2829]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 21:21:09.249119 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 21:21:09.249742 kubelet[2829]: I0116 21:21:09.249260 2829 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 21:21:09.272672 kubelet[2829]: I0116 21:21:09.272442 2829 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 21:21:09.272672 kubelet[2829]: I0116 21:21:09.272632 2829 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 21:21:09.273001 kubelet[2829]: I0116 21:21:09.272895 2829 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 21:21:09.274458 kubelet[2829]: I0116 21:21:09.274212 2829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 16 21:21:09.278293 kubelet[2829]: I0116 21:21:09.277279 2829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 21:21:09.298335 kubelet[2829]: I0116 21:21:09.298047 2829 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 21:21:09.319180 kubelet[2829]: I0116 21:21:09.319026 2829 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 21:21:09.322263 kubelet[2829]: I0116 21:21:09.319404 2829 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 21:21:09.322263 kubelet[2829]: I0116 21:21:09.319447 2829 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 21:21:09.322263 kubelet[2829]: I0116 21:21:09.321230 2829 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 21:21:09.322263 kubelet[2829]: I0116 21:21:09.321243 2829 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 21:21:09.322655 kubelet[2829]: I0116 21:21:09.321472 2829 state_mem.go:36] "Initialized new in-memory state store" Jan 16 21:21:09.322655 kubelet[2829]: I0116 21:21:09.322272 2829 kubelet.go:446] "Attempting to sync node with API server" Jan 16 21:21:09.322655 kubelet[2829]: I0116 21:21:09.322302 2829 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 21:21:09.322655 kubelet[2829]: I0116 21:21:09.322332 2829 kubelet.go:352] "Adding apiserver pod source" Jan 16 21:21:09.322655 kubelet[2829]: I0116 21:21:09.322348 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 21:21:09.327699 kubelet[2829]: I0116 21:21:09.326834 2829 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 21:21:09.328380 kubelet[2829]: I0116 21:21:09.328169 2829 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 21:21:09.331331 kubelet[2829]: I0116 21:21:09.331243 2829 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 21:21:09.331331 kubelet[2829]: I0116 21:21:09.331282 2829 server.go:1287] "Started kubelet" Jan 16 21:21:09.332642 kubelet[2829]: I0116 21:21:09.332340 2829 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 21:21:09.332811 kubelet[2829]: I0116 21:21:09.332616 
2829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 21:21:09.332959 kubelet[2829]: I0116 21:21:09.332937 2829 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 21:21:09.334051 kubelet[2829]: I0116 21:21:09.333875 2829 server.go:479] "Adding debug handlers to kubelet server" Jan 16 21:21:09.335905 kubelet[2829]: I0116 21:21:09.335884 2829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 21:21:09.342252 kubelet[2829]: I0116 21:21:09.341980 2829 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 21:21:09.356781 kubelet[2829]: I0116 21:21:09.353456 2829 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 21:21:09.356781 kubelet[2829]: E0116 21:21:09.353753 2829 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 16 21:21:09.356781 kubelet[2829]: I0116 21:21:09.354167 2829 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 21:21:09.356781 kubelet[2829]: I0116 21:21:09.354330 2829 reconciler.go:26] "Reconciler: start to sync state" Jan 16 21:21:09.383062 kubelet[2829]: E0116 21:21:09.382940 2829 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 21:21:09.394398 kubelet[2829]: I0116 21:21:09.393891 2829 factory.go:221] Registration of the containerd container factory successfully Jan 16 21:21:09.396153 kubelet[2829]: I0116 21:21:09.395865 2829 factory.go:221] Registration of the systemd container factory successfully Jan 16 21:21:09.396270 kubelet[2829]: I0116 21:21:09.396070 2829 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 21:21:09.421903 kubelet[2829]: I0116 21:21:09.421056 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 21:21:09.441241 kubelet[2829]: I0116 21:21:09.440872 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 21:21:09.441241 kubelet[2829]: I0116 21:21:09.440974 2829 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 21:21:09.441241 kubelet[2829]: I0116 21:21:09.441008 2829 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 21:21:09.441241 kubelet[2829]: I0116 21:21:09.441019 2829 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 16 21:21:09.441241 kubelet[2829]: E0116 21:21:09.441217 2829 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 16 21:21:09.542977 kubelet[2829]: E0116 21:21:09.541505 2829 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 16 21:21:09.588392 kubelet[2829]: I0116 21:21:09.587999 2829 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 16 21:21:09.588392 kubelet[2829]: I0116 21:21:09.588268 2829 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 16 21:21:09.588392 kubelet[2829]: I0116 21:21:09.588294 2829 state_mem.go:36] "Initialized new in-memory state store"
Jan 16 21:21:09.588973 kubelet[2829]: I0116 21:21:09.588601 2829 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 16 21:21:09.588973 kubelet[2829]: I0116 21:21:09.588617 2829 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 16 21:21:09.588973 kubelet[2829]: I0116 21:21:09.588645 2829 policy_none.go:49] "None policy: Start"
Jan 16 21:21:09.588973 kubelet[2829]: I0116 21:21:09.588657 2829 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 16 21:21:09.588973 kubelet[2829]: I0116 21:21:09.588671 2829 state_mem.go:35] "Initializing new in-memory state store"
Jan 16 21:21:09.588973 kubelet[2829]: I0116 21:21:09.588808 2829 state_mem.go:75] "Updated machine memory state"
Jan 16 21:21:09.617224 kubelet[2829]: I0116 21:21:09.616772 2829 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 16 21:21:09.621179 kubelet[2829]: I0116 21:21:09.620891 2829 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 16 21:21:09.621179 kubelet[2829]: I0116 21:21:09.620949 2829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 16 21:21:09.621620 kubelet[2829]: I0116 21:21:09.621324 2829 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 16 21:21:09.626686 kubelet[2829]: E0116 21:21:09.626397 2829 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 16 21:21:09.746895 kubelet[2829]: I0116 21:21:09.746018 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 16 21:21:09.753999 kubelet[2829]: I0116 21:21:09.753426 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:21:09.759194 kubelet[2829]: I0116 21:21:09.758359 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 16 21:21:09.763635 kubelet[2829]: I0116 21:21:09.763597 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:21:09.765875 kubelet[2829]: I0116 21:21:09.765751 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:21:09.768187 kubelet[2829]: I0116 21:21:09.766493 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44233b9e9eda92bbdac8cb431fa182b5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"44233b9e9eda92bbdac8cb431fa182b5\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:21:09.768381 kubelet[2829]: I0116 21:21:09.768354 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:21:09.768490 kubelet[2829]: I0116 21:21:09.768470 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:21:09.768810 kubelet[2829]: I0116 21:21:09.768784 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost"
Jan 16 21:21:09.768931 kubelet[2829]: I0116 21:21:09.768908 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost"
Jan 16 21:21:09.769929 kubelet[2829]: I0116 21:21:09.769811 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44233b9e9eda92bbdac8cb431fa182b5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"44233b9e9eda92bbdac8cb431fa182b5\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:21:09.769929 kubelet[2829]: I0116 21:21:09.769894 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44233b9e9eda92bbdac8cb431fa182b5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"44233b9e9eda92bbdac8cb431fa182b5\") " pod="kube-system/kube-apiserver-localhost"
Jan 16 21:21:09.788254 kubelet[2829]: I0116 21:21:09.787982 2829 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 16 21:21:09.800237 kubelet[2829]: E0116 21:21:09.799701 2829 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 16 21:21:09.823575 kubelet[2829]: I0116 21:21:09.823378 2829 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 16 21:21:09.823575 kubelet[2829]: I0116 21:21:09.823576 2829 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 16 21:21:10.103369 kubelet[2829]: E0116 21:21:10.102695 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:10.103369 kubelet[2829]: E0116 21:21:10.102764 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:10.103369 kubelet[2829]: E0116 21:21:10.102971 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:10.332877 kubelet[2829]: I0116 21:21:10.323995 2829 apiserver.go:52] "Watching apiserver"
Jan 16 21:21:10.358317 kubelet[2829]: I0116 21:21:10.354397 2829 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 16 21:21:10.433194 kubelet[2829]: I0116 21:21:10.432387 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.431463446 podStartE2EDuration="1.431463446s" podCreationTimestamp="2026-01-16 21:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:21:10.405221299 +0000 UTC m=+1.282252315" watchObservedRunningTime="2026-01-16 21:21:10.431463446 +0000 UTC m=+1.308494451"
Jan 16 21:21:10.433194 kubelet[2829]: I0116 21:21:10.432745 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.432732118 podStartE2EDuration="1.432732118s" podCreationTimestamp="2026-01-16 21:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:21:10.432653713 +0000 UTC m=+1.309684719" watchObservedRunningTime="2026-01-16 21:21:10.432732118 +0000 UTC m=+1.309763124"
Jan 16 21:21:10.543943 kubelet[2829]: E0116 21:21:10.542234 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:10.544935 kubelet[2829]: E0116 21:21:10.544617 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:10.547590 kubelet[2829]: I0116 21:21:10.546668 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 16 21:21:10.576465 kubelet[2829]: E0116 21:21:10.576279 2829 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 16 21:21:10.576465 kubelet[2829]: E0116 21:21:10.576412 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:11.546754 kubelet[2829]: E0116 21:21:11.546655 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:11.547774 kubelet[2829]: E0116 21:21:11.547686 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:12.353434 kubelet[2829]: I0116 21:21:12.353381 2829 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 16 21:21:12.354695 containerd[1596]: time="2026-01-16T21:21:12.354389745Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 16 21:21:12.355376 kubelet[2829]: I0116 21:21:12.355050 2829 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 16 21:21:12.429826 kubelet[2829]: W0116 21:21:12.429740 2829 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jan 16 21:21:12.429955 kubelet[2829]: E0116 21:21:12.429835 2829 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jan 16 21:21:12.433222 kubelet[2829]: I0116 21:21:12.432198 2829 status_manager.go:890] "Failed to get status for pod" podUID="e60e767a-0858-4fe4-8ac6-036b30715f1c" pod="kube-system/kube-proxy-g84kh" err="pods \"kube-proxy-g84kh\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
Jan 16 21:21:12.433222 kubelet[2829]: W0116 21:21:12.432421 2829 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jan 16 21:21:12.433222 kubelet[2829]: E0116 21:21:12.432451 2829 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jan 16 21:21:12.436764 systemd[1]: Created slice kubepods-besteffort-pode60e767a_0858_4fe4_8ac6_036b30715f1c.slice - libcontainer container kubepods-besteffort-pode60e767a_0858_4fe4_8ac6_036b30715f1c.slice.
Jan 16 21:21:12.513653 kubelet[2829]: I0116 21:21:12.512924 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e60e767a-0858-4fe4-8ac6-036b30715f1c-lib-modules\") pod \"kube-proxy-g84kh\" (UID: \"e60e767a-0858-4fe4-8ac6-036b30715f1c\") " pod="kube-system/kube-proxy-g84kh"
Jan 16 21:21:12.514708 kubelet[2829]: I0116 21:21:12.513911 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e60e767a-0858-4fe4-8ac6-036b30715f1c-kube-proxy\") pod \"kube-proxy-g84kh\" (UID: \"e60e767a-0858-4fe4-8ac6-036b30715f1c\") " pod="kube-system/kube-proxy-g84kh"
Jan 16 21:21:12.514708 kubelet[2829]: I0116 21:21:12.513996 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e60e767a-0858-4fe4-8ac6-036b30715f1c-xtables-lock\") pod \"kube-proxy-g84kh\" (UID: \"e60e767a-0858-4fe4-8ac6-036b30715f1c\") " pod="kube-system/kube-proxy-g84kh"
Jan 16 21:21:12.514708 kubelet[2829]: I0116 21:21:12.514023 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsjp5\" (UniqueName: \"kubernetes.io/projected/e60e767a-0858-4fe4-8ac6-036b30715f1c-kube-api-access-tsjp5\") pod \"kube-proxy-g84kh\" (UID: \"e60e767a-0858-4fe4-8ac6-036b30715f1c\") " pod="kube-system/kube-proxy-g84kh"
Jan 16 21:21:13.049482 kubelet[2829]: E0116 21:21:13.048773 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:13.564806 kubelet[2829]: E0116 21:21:13.564649 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:13.566397 systemd[1]: Created slice kubepods-besteffort-pod99c45860_8a39_4474_9c73_f4039a0ca9eb.slice - libcontainer container kubepods-besteffort-pod99c45860_8a39_4474_9c73_f4039a0ca9eb.slice.
Jan 16 21:21:13.619354 kubelet[2829]: E0116 21:21:13.618613 2829 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 16 21:21:13.619354 kubelet[2829]: E0116 21:21:13.618761 2829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e60e767a-0858-4fe4-8ac6-036b30715f1c-kube-proxy podName:e60e767a-0858-4fe4-8ac6-036b30715f1c nodeName:}" failed. No retries permitted until 2026-01-16 21:21:14.118734893 +0000 UTC m=+4.995765899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e60e767a-0858-4fe4-8ac6-036b30715f1c-kube-proxy") pod "kube-proxy-g84kh" (UID: "e60e767a-0858-4fe4-8ac6-036b30715f1c") : failed to sync configmap cache: timed out waiting for the condition
Jan 16 21:21:13.635426 kubelet[2829]: I0116 21:21:13.634947 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99c45860-8a39-4474-9c73-f4039a0ca9eb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r2br7\" (UID: \"99c45860-8a39-4474-9c73-f4039a0ca9eb\") " pod="tigera-operator/tigera-operator-7dcd859c48-r2br7"
Jan 16 21:21:13.635426 kubelet[2829]: I0116 21:21:13.635281 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ljz6\" (UniqueName: \"kubernetes.io/projected/99c45860-8a39-4474-9c73-f4039a0ca9eb-kube-api-access-4ljz6\") pod \"tigera-operator-7dcd859c48-r2br7\" (UID: \"99c45860-8a39-4474-9c73-f4039a0ca9eb\") " pod="tigera-operator/tigera-operator-7dcd859c48-r2br7"
Jan 16 21:21:13.640776 kubelet[2829]: E0116 21:21:13.640701 2829 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 16 21:21:13.640776 kubelet[2829]: E0116 21:21:13.640770 2829 projected.go:194] Error preparing data for projected volume kube-api-access-tsjp5 for pod kube-system/kube-proxy-g84kh: failed to sync configmap cache: timed out waiting for the condition
Jan 16 21:21:13.640987 kubelet[2829]: E0116 21:21:13.640833 2829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e60e767a-0858-4fe4-8ac6-036b30715f1c-kube-api-access-tsjp5 podName:e60e767a-0858-4fe4-8ac6-036b30715f1c nodeName:}" failed. No retries permitted until 2026-01-16 21:21:14.140816947 +0000 UTC m=+5.017847953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tsjp5" (UniqueName: "kubernetes.io/projected/e60e767a-0858-4fe4-8ac6-036b30715f1c-kube-api-access-tsjp5") pod "kube-proxy-g84kh" (UID: "e60e767a-0858-4fe4-8ac6-036b30715f1c") : failed to sync configmap cache: timed out waiting for the condition
Jan 16 21:21:13.882829 containerd[1596]: time="2026-01-16T21:21:13.882417338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r2br7,Uid:99c45860-8a39-4474-9c73-f4039a0ca9eb,Namespace:tigera-operator,Attempt:0,}"
Jan 16 21:21:14.097063 containerd[1596]: time="2026-01-16T21:21:14.096860945Z" level=info msg="connecting to shim 79d9c7e2698927f53a5b05a0a0c388edb2273f0c9196109075a1a5d524ac0ddf" address="unix:///run/containerd/s/db4852d2ba57862dabb9e7958815b0fba19499d063cb8ea7e3b571209fa31190" namespace=k8s.io protocol=ttrpc version=3
Jan 16 21:21:14.250853 kubelet[2829]: E0116 21:21:14.250325 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:14.263014 containerd[1596]: time="2026-01-16T21:21:14.262843515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g84kh,Uid:e60e767a-0858-4fe4-8ac6-036b30715f1c,Namespace:kube-system,Attempt:0,}"
Jan 16 21:21:14.290809 systemd[1]: Started cri-containerd-79d9c7e2698927f53a5b05a0a0c388edb2273f0c9196109075a1a5d524ac0ddf.scope - libcontainer container 79d9c7e2698927f53a5b05a0a0c388edb2273f0c9196109075a1a5d524ac0ddf.
Jan 16 21:21:14.359000 audit: BPF prog-id=133 op=LOAD
Jan 16 21:21:14.374608 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jan 16 21:21:14.374774 kernel: audit: type=1334 audit(1768598474.359:433): prog-id=133 op=LOAD
Jan 16 21:21:14.360000 audit: BPF prog-id=134 op=LOAD
Jan 16 21:21:14.392588 kernel: audit: type=1334 audit(1768598474.360:434): prog-id=134 op=LOAD
Jan 16 21:21:14.392683 kernel: audit: type=1300 audit(1768598474.360:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.360000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.425810 kernel: audit: type=1327 audit(1768598474.360:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.360000 audit: BPF prog-id=134 op=UNLOAD
Jan 16 21:21:14.360000 audit[2903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.497491 containerd[1596]: time="2026-01-16T21:21:14.496953536Z" level=info msg="connecting to shim 23565f1bb9553b57ce5bca99578d7a9369fca30cfc3a2a93296b70b5c400c1a6" address="unix:///run/containerd/s/1b9656a554ea80c4991fca36e1749306e8b8ee2c014fc70320a2764ff3a49d75" namespace=k8s.io protocol=ttrpc version=3
Jan 16 21:21:14.497615 kubelet[2829]: E0116 21:21:14.489703 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:14.500930 kernel: audit: type=1334 audit(1768598474.360:435): prog-id=134 op=UNLOAD
Jan 16 21:21:14.502733 kernel: audit: type=1300 audit(1768598474.360:435): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.502781 kernel: audit: type=1327 audit(1768598474.360:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.528992 kernel: audit: type=1334 audit(1768598474.360:436): prog-id=135 op=LOAD
Jan 16 21:21:14.360000 audit: BPF prog-id=135 op=LOAD
Jan 16 21:21:14.360000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.551465 kernel: audit: type=1300 audit(1768598474.360:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.578285 kernel: audit: type=1327 audit(1768598474.360:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.360000 audit: BPF prog-id=136 op=LOAD
Jan 16 21:21:14.360000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.360000 audit: BPF prog-id=136 op=UNLOAD
Jan 16 21:21:14.360000 audit[2903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.360000 audit: BPF prog-id=135 op=UNLOAD
Jan 16 21:21:14.360000 audit[2903]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.361000 audit: BPF prog-id=137 op=LOAD
Jan 16 21:21:14.361000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2890 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643963376532363938393237663533613562303561306130633338
Jan 16 21:21:14.582826 kubelet[2829]: E0116 21:21:14.579198 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:14.582826 kubelet[2829]: E0116 21:21:14.581139 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:14.595742 containerd[1596]: time="2026-01-16T21:21:14.595673971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r2br7,Uid:99c45860-8a39-4474-9c73-f4039a0ca9eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"79d9c7e2698927f53a5b05a0a0c388edb2273f0c9196109075a1a5d524ac0ddf\""
Jan 16 21:21:14.605666 containerd[1596]: time="2026-01-16T21:21:14.605595458Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 16 21:21:14.647762 systemd[1]: Started cri-containerd-23565f1bb9553b57ce5bca99578d7a9369fca30cfc3a2a93296b70b5c400c1a6.scope - libcontainer container 23565f1bb9553b57ce5bca99578d7a9369fca30cfc3a2a93296b70b5c400c1a6.
Jan 16 21:21:14.677000 audit: BPF prog-id=138 op=LOAD
Jan 16 21:21:14.679000 audit: BPF prog-id=139 op=LOAD
Jan 16 21:21:14.679000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=2933 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353635663162623935353362353763653562636139393537386437
Jan 16 21:21:14.679000 audit: BPF prog-id=139 op=UNLOAD
Jan 16 21:21:14.679000 audit[2946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353635663162623935353362353763653562636139393537386437
Jan 16 21:21:14.682000 audit: BPF prog-id=140 op=LOAD
Jan 16 21:21:14.682000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=2933 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353635663162623935353362353763653562636139393537386437
Jan 16 21:21:14.683000 audit: BPF prog-id=141 op=LOAD
Jan 16 21:21:14.683000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=2933 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353635663162623935353362353763653562636139393537386437
Jan 16 21:21:14.683000 audit: BPF prog-id=141 op=UNLOAD
Jan 16 21:21:14.683000 audit[2946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353635663162623935353362353763653562636139393537386437
Jan 16 21:21:14.683000 audit: BPF prog-id=140 op=UNLOAD
Jan 16 21:21:14.683000 audit[2946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353635663162623935353362353763653562636139393537386437
Jan 16 21:21:14.683000 audit: BPF prog-id=142 op=LOAD
Jan 16 21:21:14.683000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=2933 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:14.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233353635663162623935353362353763653562636139393537386437
Jan 16 21:21:14.741650 containerd[1596]: time="2026-01-16T21:21:14.741478129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g84kh,Uid:e60e767a-0858-4fe4-8ac6-036b30715f1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"23565f1bb9553b57ce5bca99578d7a9369fca30cfc3a2a93296b70b5c400c1a6\""
Jan 16 21:21:14.743211 kubelet[2829]: E0116 21:21:14.742962 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:14.748380 containerd[1596]: time="2026-01-16T21:21:14.747829394Z" level=info msg="CreateContainer within sandbox \"23565f1bb9553b57ce5bca99578d7a9369fca30cfc3a2a93296b70b5c400c1a6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 16 21:21:14.857179 containerd[1596]: time="2026-01-16T21:21:14.856309869Z" level=info msg="Container db17d3379f8bb349620a98a03152a996415eb94b6e144b902daa78b1a4708ac2: CDI devices from CRI Config.CDIDevices: []"
Jan 16 21:21:14.879622 containerd[1596]: time="2026-01-16T21:21:14.879296063Z" level=info msg="CreateContainer within sandbox \"23565f1bb9553b57ce5bca99578d7a9369fca30cfc3a2a93296b70b5c400c1a6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db17d3379f8bb349620a98a03152a996415eb94b6e144b902daa78b1a4708ac2\""
Jan 16 21:21:14.881258 containerd[1596]: time="2026-01-16T21:21:14.881036637Z" level=info msg="StartContainer for \"db17d3379f8bb349620a98a03152a996415eb94b6e144b902daa78b1a4708ac2\""
Jan 16 21:21:14.883334 containerd[1596]: time="2026-01-16T21:21:14.882979126Z" level=info msg="connecting to shim db17d3379f8bb349620a98a03152a996415eb94b6e144b902daa78b1a4708ac2" address="unix:///run/containerd/s/1b9656a554ea80c4991fca36e1749306e8b8ee2c014fc70320a2764ff3a49d75" protocol=ttrpc version=3
Jan 16 21:21:14.932622 systemd[1]: Started cri-containerd-db17d3379f8bb349620a98a03152a996415eb94b6e144b902daa78b1a4708ac2.scope - libcontainer container db17d3379f8bb349620a98a03152a996415eb94b6e144b902daa78b1a4708ac2.
Jan 16 21:21:15.073000 audit: BPF prog-id=143 op=LOAD
Jan 16 21:21:15.073000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2933 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:15.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462313764333337396638626233343936323061393861303331353261
Jan 16 21:21:15.073000 audit: BPF prog-id=144 op=LOAD
Jan 16 21:21:15.073000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2933 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:15.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462313764333337396638626233343936323061393861303331353261
Jan 16 21:21:15.073000 audit: BPF prog-id=144 op=UNLOAD
Jan 16 21:21:15.073000 audit[2976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:15.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462313764333337396638626233343936323061393861303331353261
Jan 16 21:21:15.074000 audit: BPF prog-id=143 op=UNLOAD
Jan 16 21:21:15.074000 audit[2976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2933 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:15.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462313764333337396638626233343936323061393861303331353261
Jan 16 21:21:15.074000 audit: BPF prog-id=145 op=LOAD
Jan 16 21:21:15.074000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2933 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:21:15.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462313764333337396638626233343936323061393861303331353261
Jan 16 21:21:15.135241 containerd[1596]: time="2026-01-16T21:21:15.133754117Z" level=info msg="StartContainer for \"db17d3379f8bb349620a98a03152a996415eb94b6e144b902daa78b1a4708ac2\" returns successfully"
Jan 16 21:21:15.506000 audit[3040]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 16 21:21:15.506000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7e343090 a2=0 a3=7fff7e34307c items=0 ppid=2988 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:21:15.507000 audit[3041]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.507000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd64f8f10 a2=0 a3=7ffcd64f8efc items=0 ppid=2988 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 21:21:15.512000 audit[3043]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.512000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff844b0830 a2=0 a3=7fff844b081c items=0 ppid=2988 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:21:15.516000 audit[3042]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3042 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.516000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe4f1fd50 a2=0 a3=7fffe4f1fd3c items=0 ppid=2988 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 21:21:15.519000 audit[3044]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.519000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb77730e0 a2=0 a3=7ffdb77730cc items=0 ppid=2988 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:21:15.528000 audit[3045]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.528000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf56b9e10 a2=0 a3=7ffdf56b9dfc items=0 ppid=2988 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.528000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 21:21:15.591223 kubelet[2829]: E0116 21:21:15.590892 2829 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:15.625000 audit[3047]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.625000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd6c554cb0 a2=0 a3=7ffd6c554c9c items=0 ppid=2988 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:21:15.637000 audit[3049]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.637000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe20487e90 a2=0 a3=7ffe20487e7c items=0 ppid=2988 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 16 21:21:15.650000 audit[3052]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.650000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe60bff880 a2=0 a3=7ffe60bff86c items=0 ppid=2988 pid=3052 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 16 21:21:15.654000 audit[3053]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.654000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6732a100 a2=0 a3=7ffc6732a0ec items=0 ppid=2988 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:21:15.664000 audit[3055]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.664000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb11e9ee0 a2=0 a3=7ffcb11e9ecc items=0 ppid=2988 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 
21:21:15.678000 audit[3056]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.678000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc11d0ed80 a2=0 a3=7ffc11d0ed6c items=0 ppid=2988 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:21:15.689000 audit[3058]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.689000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe121e6330 a2=0 a3=7ffe121e631c items=0 ppid=2988 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 21:21:15.701000 audit[3061]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.701000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc0fca2650 a2=0 a3=7ffc0fca263c items=0 ppid=2988 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:21:15.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 16 21:21:15.706000 audit[3062]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.706000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff876623b0 a2=0 a3=7fff8766239c items=0 ppid=2988 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:21:15.714000 audit[3064]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.714000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2e906c80 a2=0 a3=7ffd2e906c6c items=0 ppid=2988 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 21:21:15.718000 audit[3065]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.718000 audit[3065]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffc3ea0f920 a2=0 a3=7ffc3ea0f90c items=0 ppid=2988 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:21:15.726000 audit[3067]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.726000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff12dcde50 a2=0 a3=7fff12dcde3c items=0 ppid=2988 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:21:15.737000 audit[3070]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.737000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8b7a2c90 a2=0 a3=7fff8b7a2c7c items=0 ppid=2988 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.737000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:21:15.749000 audit[3073]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.749000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe540726e0 a2=0 a3=7ffe540726cc items=0 ppid=2988 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 21:21:15.756000 audit[3074]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.756000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffed4816d0 a2=0 a3=7fffed4816bc items=0 ppid=2988 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:21:15.766000 audit[3076]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.766000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7fffa0011290 a2=0 a3=7fffa001127c items=0 ppid=2988 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:21:15.782000 audit[3079]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.782000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3a606650 a2=0 a3=7ffd3a60663c items=0 ppid=2988 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:21:15.786000 audit[3080]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.786000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff17a50f90 a2=0 a3=7fff17a50f7c items=0 ppid=2988 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 
21:21:15.802000 audit[3082]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 21:21:15.802000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcd16850e0 a2=0 a3=7ffcd16850cc items=0 ppid=2988 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:21:15.891000 audit[3088]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:15.891000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff2c6f3350 a2=0 a3=7fff2c6f333c items=0 ppid=2988 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:15.916000 audit[3088]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:15.916000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff2c6f3350 a2=0 a3=7fff2c6f333c items=0 ppid=2988 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:15.921000 audit[3093]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.921000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffde8a13060 a2=0 a3=7ffde8a1304c items=0 ppid=2988 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 21:21:15.936000 audit[3095]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.936000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff35ca7f80 a2=0 a3=7fff35ca7f6c items=0 ppid=2988 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 16 21:21:15.966000 audit[3098]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.966000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffd82f70400 a2=0 a3=7ffd82f703ec items=0 ppid=2988 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 16 21:21:15.984000 audit[3099]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.984000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffff26aad0 a2=0 a3=7fffff26aabc items=0 ppid=2988 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 21:21:15.995000 audit[3101]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:15.995000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1c2408d0 a2=0 a3=7fff1c2408bc items=0 ppid=2988 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:15.995000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 21:21:16.002000 audit[3102]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.002000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe322ea1d0 a2=0 a3=7ffe322ea1bc items=0 ppid=2988 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 21:21:16.022000 audit[3105]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.022000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff765aea50 a2=0 a3=7fff765aea3c items=0 ppid=2988 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 16 21:21:16.042000 audit[3111]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.042000 audit[3111]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffe9f5af980 a2=0 a3=7ffe9f5af96c items=0 ppid=2988 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 21:21:16.051000 audit[3112]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.051000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd755735c0 a2=0 a3=7ffd755735ac items=0 ppid=2988 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 21:21:16.063000 audit[3114]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.063000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc68d2a10 a2=0 a3=7fffc68d29fc items=0 ppid=2988 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.063000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 21:21:16.068000 audit[3115]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.068000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe991f6330 a2=0 a3=7ffe991f631c items=0 ppid=2988 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 21:21:16.079000 audit[3117]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.079000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5fa21110 a2=0 a3=7ffc5fa210fc items=0 ppid=2988 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 21:21:16.098000 audit[3120]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.098000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffc2f696110 a2=0 a3=7ffc2f6960fc items=0 ppid=2988 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 21:21:16.113000 audit[3123]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.113000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0d1d9700 a2=0 a3=7ffc0d1d96ec items=0 ppid=2988 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 16 21:21:16.120000 audit[3124]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.120000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc76314b10 a2=0 a3=7ffc76314afc items=0 ppid=2988 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.120000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 21:21:16.131207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932851400.mount: Deactivated successfully. Jan 16 21:21:16.132000 audit[3126]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.132000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff16313b60 a2=0 a3=7fff16313b4c items=0 ppid=2988 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:21:16.151000 audit[3129]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.151000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9f452420 a2=0 a3=7ffe9f45240c items=0 ppid=2988 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 21:21:16.155000 audit[3130]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.155000 
audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd539eb30 a2=0 a3=7ffcd539eb1c items=0 ppid=2988 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 21:21:16.164000 audit[3132]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.164000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff2d35c180 a2=0 a3=7fff2d35c16c items=0 ppid=2988 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 21:21:16.171000 audit[3133]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.171000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7e7db430 a2=0 a3=7ffc7e7db41c items=0 ppid=2988 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 21:21:16.180000 audit[3135]: 
NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.180000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb61b1400 a2=0 a3=7ffdb61b13ec items=0 ppid=2988 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:21:16.204000 audit[3138]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 21:21:16.204000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd036d2f30 a2=0 a3=7ffd036d2f1c items=0 ppid=2988 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.204000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 21:21:16.223000 audit[3140]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:21:16.223000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffed907ae10 a2=0 a3=7ffed907adfc items=0 ppid=2988 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.223000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:16.224000 audit[3140]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 21:21:16.224000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffed907ae10 a2=0 a3=7ffed907adfc items=0 ppid=2988 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:16.224000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:18.185746 containerd[1596]: time="2026-01-16T21:21:18.184218478Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:18.194702 containerd[1596]: time="2026-01-16T21:21:18.191064473Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 16 21:21:18.198451 containerd[1596]: time="2026-01-16T21:21:18.197869198Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:18.217209 containerd[1596]: time="2026-01-16T21:21:18.216994040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:18.217866 containerd[1596]: time="2026-01-16T21:21:18.217715244Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag 
\"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.612073961s" Jan 16 21:21:18.217866 containerd[1596]: time="2026-01-16T21:21:18.217762022Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 16 21:21:18.223638 containerd[1596]: time="2026-01-16T21:21:18.221971809Z" level=info msg="CreateContainer within sandbox \"79d9c7e2698927f53a5b05a0a0c388edb2273f0c9196109075a1a5d524ac0ddf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 21:21:18.274696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886150773.mount: Deactivated successfully. Jan 16 21:21:18.278512 containerd[1596]: time="2026-01-16T21:21:18.278351798Z" level=info msg="Container a7b94604e4a7f629326716dffe324ad9540ccadefa012efc891d9a2d141ea40b: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:21:18.321880 containerd[1596]: time="2026-01-16T21:21:18.321609689Z" level=info msg="CreateContainer within sandbox \"79d9c7e2698927f53a5b05a0a0c388edb2273f0c9196109075a1a5d524ac0ddf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a7b94604e4a7f629326716dffe324ad9540ccadefa012efc891d9a2d141ea40b\"" Jan 16 21:21:18.325017 containerd[1596]: time="2026-01-16T21:21:18.324657601Z" level=info msg="StartContainer for \"a7b94604e4a7f629326716dffe324ad9540ccadefa012efc891d9a2d141ea40b\"" Jan 16 21:21:18.327245 containerd[1596]: time="2026-01-16T21:21:18.325960830Z" level=info msg="connecting to shim a7b94604e4a7f629326716dffe324ad9540ccadefa012efc891d9a2d141ea40b" address="unix:///run/containerd/s/db4852d2ba57862dabb9e7958815b0fba19499d063cb8ea7e3b571209fa31190" protocol=ttrpc version=3 Jan 16 21:21:18.419622 systemd[1]: Started cri-containerd-a7b94604e4a7f629326716dffe324ad9540ccadefa012efc891d9a2d141ea40b.scope - 
libcontainer container a7b94604e4a7f629326716dffe324ad9540ccadefa012efc891d9a2d141ea40b. Jan 16 21:21:18.485000 audit: BPF prog-id=146 op=LOAD Jan 16 21:21:18.486000 audit: BPF prog-id=147 op=LOAD Jan 16 21:21:18.486000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2890 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623934363034653461376636323933323637313664666665333234 Jan 16 21:21:18.486000 audit: BPF prog-id=147 op=UNLOAD Jan 16 21:21:18.486000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623934363034653461376636323933323637313664666665333234 Jan 16 21:21:18.487000 audit: BPF prog-id=148 op=LOAD Jan 16 21:21:18.487000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2890 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.487000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623934363034653461376636323933323637313664666665333234 Jan 16 21:21:18.487000 audit: BPF prog-id=149 op=LOAD Jan 16 21:21:18.487000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2890 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623934363034653461376636323933323637313664666665333234 Jan 16 21:21:18.487000 audit: BPF prog-id=149 op=UNLOAD Jan 16 21:21:18.487000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623934363034653461376636323933323637313664666665333234 Jan 16 21:21:18.487000 audit: BPF prog-id=148 op=UNLOAD Jan 16 21:21:18.487000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2890 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:21:18.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623934363034653461376636323933323637313664666665333234 Jan 16 21:21:18.487000 audit: BPF prog-id=150 op=LOAD Jan 16 21:21:18.487000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2890 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:18.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137623934363034653461376636323933323637313664666665333234 Jan 16 21:21:18.565515 containerd[1596]: time="2026-01-16T21:21:18.565332193Z" level=info msg="StartContainer for \"a7b94604e4a7f629326716dffe324ad9540ccadefa012efc891d9a2d141ea40b\" returns successfully" Jan 16 21:21:18.658023 kubelet[2829]: I0116 21:21:18.657752 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g84kh" podStartSLOduration=6.657679354 podStartE2EDuration="6.657679354s" podCreationTimestamp="2026-01-16 21:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:21:15.619671106 +0000 UTC m=+6.496702143" watchObservedRunningTime="2026-01-16 21:21:18.657679354 +0000 UTC m=+9.534710360" Jan 16 21:21:18.658823 kubelet[2829]: I0116 21:21:18.658182 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r2br7" podStartSLOduration=2.042303422 podStartE2EDuration="5.658071966s" 
podCreationTimestamp="2026-01-16 21:21:13 +0000 UTC" firstStartedPulling="2026-01-16 21:21:14.60343882 +0000 UTC m=+5.480469826" lastFinishedPulling="2026-01-16 21:21:18.219207363 +0000 UTC m=+9.096238370" observedRunningTime="2026-01-16 21:21:18.653717718 +0000 UTC m=+9.530748734" watchObservedRunningTime="2026-01-16 21:21:18.658071966 +0000 UTC m=+9.535102992" Jan 16 21:21:19.902684 kubelet[2829]: E0116 21:21:19.899647 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:20.626864 kubelet[2829]: E0116 21:21:20.625690 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:24.954000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:21:24.955396 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 16 21:21:24.961004 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 16 21:21:24.961225 kernel: audit: type=1106 audit(1768598484.954:513): pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:21:24.967646 sshd[1815]: Connection closed by 10.0.0.1 port 46380 Jan 16 21:21:24.971519 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 16 21:21:25.001182 kernel: audit: type=1104 audit(1768598484.954:514): pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:24.954000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 21:21:24.987217 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. Jan 16 21:21:24.990212 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:46380.service: Deactivated successfully. Jan 16 21:21:24.996929 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 21:21:24.999776 systemd[1]: session-8.scope: Consumed 8.237s CPU time, 214.9M memory peak. Jan 16 21:21:24.975000 audit[1811]: USER_END pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:25.008822 systemd-logind[1575]: Removed session 8. 
Jan 16 21:21:24.977000 audit[1811]: CRED_DISP pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:25.048987 kernel: audit: type=1106 audit(1768598484.975:515): pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:25.049248 kernel: audit: type=1104 audit(1768598484.977:516): pid=1811 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:21:25.049304 kernel: audit: type=1131 audit(1768598484.988:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:46380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:21:24.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.59:22-10.0.0.1:46380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:21:25.768000 audit[3236]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:25.789322 kernel: audit: type=1325 audit(1768598485.768:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:25.768000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd66c47b50 a2=0 a3=7ffd66c47b3c items=0 ppid=2988 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:25.822285 kernel: audit: type=1300 audit(1768598485.768:518): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd66c47b50 a2=0 a3=7ffd66c47b3c items=0 ppid=2988 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:25.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:25.850207 kernel: audit: type=1327 audit(1768598485.768:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:25.850323 kernel: audit: type=1325 audit(1768598485.790:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:25.790000 audit[3236]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:25.790000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd66c47b50 a2=0 a3=0 items=0 ppid=2988 
pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:25.882212 kernel: audit: type=1300 audit(1768598485.790:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd66c47b50 a2=0 a3=0 items=0 ppid=2988 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:25.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:25.899000 audit[3238]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:25.899000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7767f750 a2=0 a3=7fff7767f73c items=0 ppid=2988 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:25.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:25.913000 audit[3238]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:25.913000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7767f750 a2=0 a3=0 items=0 ppid=2988 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:25.913000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:29.788000 audit[3240]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:29.788000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffeb5a69ad0 a2=0 a3=7ffeb5a69abc items=0 ppid=2988 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:29.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:29.809000 audit[3240]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:29.809000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb5a69ad0 a2=0 a3=0 items=0 ppid=2988 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:29.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:29.963000 audit[3242]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:29.970804 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 16 21:21:29.970852 kernel: audit: type=1325 audit(1768598489.963:524): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:29.963000 audit[3242]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=6736 a0=3 a1=7fff2c4bc940 a2=0 a3=7fff2c4bc92c items=0 ppid=2988 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:30.011348 kernel: audit: type=1300 audit(1768598489.963:524): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff2c4bc940 a2=0 a3=7fff2c4bc92c items=0 ppid=2988 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:29.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:30.024347 kernel: audit: type=1327 audit(1768598489.963:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:30.019000 audit[3242]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:30.037214 kernel: audit: type=1325 audit(1768598490.019:525): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:30.019000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2c4bc940 a2=0 a3=0 items=0 ppid=2988 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:30.062316 kernel: audit: type=1300 audit(1768598490.019:525): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2c4bc940 a2=0 a3=0 items=0 ppid=2988 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:30.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:30.077288 kernel: audit: type=1327 audit(1768598490.019:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:31.107000 audit[3244]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:31.107000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce1ccb1a0 a2=0 a3=7ffce1ccb18c items=0 ppid=2988 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:31.154838 kernel: audit: type=1325 audit(1768598491.107:526): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:31.155188 kernel: audit: type=1300 audit(1768598491.107:526): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce1ccb1a0 a2=0 a3=7ffce1ccb18c items=0 ppid=2988 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:31.155252 kernel: audit: type=1327 audit(1768598491.107:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:31.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:31.173000 audit[3244]: 
NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:31.173000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce1ccb1a0 a2=0 a3=0 items=0 ppid=2988 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:31.187217 kernel: audit: type=1325 audit(1768598491.173:527): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:31.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:32.718000 audit[3246]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:32.718000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc360e1270 a2=0 a3=7ffc360e125c items=0 ppid=2988 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:32.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:32.731000 audit[3246]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:32.731000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc360e1270 a2=0 a3=0 items=0 ppid=2988 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:32.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:32.811953 systemd[1]: Created slice kubepods-besteffort-pod65cfaeb6_56b1_4413_a2a3_7e48f39784a5.slice - libcontainer container kubepods-besteffort-pod65cfaeb6_56b1_4413_a2a3_7e48f39784a5.slice. Jan 16 21:21:32.820000 audit[3248]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:32.820000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc3e0fab60 a2=0 a3=7ffc3e0fab4c items=0 ppid=2988 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:32.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:32.828000 audit[3248]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:32.828000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3e0fab60 a2=0 a3=0 items=0 ppid=2988 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:32.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:32.889389 kubelet[2829]: I0116 21:21:32.888738 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/65cfaeb6-56b1-4413-a2a3-7e48f39784a5-typha-certs\") pod \"calico-typha-57c7d5b58d-nzvq5\" (UID: \"65cfaeb6-56b1-4413-a2a3-7e48f39784a5\") " pod="calico-system/calico-typha-57c7d5b58d-nzvq5" Jan 16 21:21:32.889389 kubelet[2829]: I0116 21:21:32.888850 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgrp\" (UniqueName: \"kubernetes.io/projected/65cfaeb6-56b1-4413-a2a3-7e48f39784a5-kube-api-access-xwgrp\") pod \"calico-typha-57c7d5b58d-nzvq5\" (UID: \"65cfaeb6-56b1-4413-a2a3-7e48f39784a5\") " pod="calico-system/calico-typha-57c7d5b58d-nzvq5" Jan 16 21:21:32.889389 kubelet[2829]: I0116 21:21:32.888905 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65cfaeb6-56b1-4413-a2a3-7e48f39784a5-tigera-ca-bundle\") pod \"calico-typha-57c7d5b58d-nzvq5\" (UID: \"65cfaeb6-56b1-4413-a2a3-7e48f39784a5\") " pod="calico-system/calico-typha-57c7d5b58d-nzvq5" Jan 16 21:21:33.081685 systemd[1]: Created slice kubepods-besteffort-pod6c88b5a1_7790_44a1_92fd_40c46ca67f7f.slice - libcontainer container kubepods-besteffort-pod6c88b5a1_7790_44a1_92fd_40c46ca67f7f.slice. 
Jan 16 21:21:33.120862 kubelet[2829]: E0116 21:21:33.120365 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:33.123033 containerd[1596]: time="2026-01-16T21:21:33.122927970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c7d5b58d-nzvq5,Uid:65cfaeb6-56b1-4413-a2a3-7e48f39784a5,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:33.194746 kubelet[2829]: I0116 21:21:33.194260 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-cni-log-dir\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195687 kubelet[2829]: I0116 21:21:33.194898 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-policysync\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195687 kubelet[2829]: I0116 21:21:33.194926 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-tigera-ca-bundle\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195687 kubelet[2829]: I0116 21:21:33.194946 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-cni-bin-dir\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " 
pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195687 kubelet[2829]: I0116 21:21:33.194960 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjpgx\" (UniqueName: \"kubernetes.io/projected/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-kube-api-access-kjpgx\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195687 kubelet[2829]: I0116 21:21:33.194980 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-xtables-lock\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195872 kubelet[2829]: I0116 21:21:33.194994 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-cni-net-dir\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195872 kubelet[2829]: I0116 21:21:33.195006 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-lib-modules\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195872 kubelet[2829]: I0116 21:21:33.195022 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-node-certs\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195872 kubelet[2829]: 
I0116 21:21:33.195036 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-var-lib-calico\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195872 kubelet[2829]: I0116 21:21:33.195049 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-var-run-calico\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195969 kubelet[2829]: I0116 21:21:33.195065 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6c88b5a1-7790-44a1-92fd-40c46ca67f7f-flexvol-driver-host\") pod \"calico-node-58bkq\" (UID: \"6c88b5a1-7790-44a1-92fd-40c46ca67f7f\") " pod="calico-system/calico-node-58bkq" Jan 16 21:21:33.195969 kubelet[2829]: E0116 21:21:33.195334 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:33.215817 containerd[1596]: time="2026-01-16T21:21:33.215459940Z" level=info msg="connecting to shim b419fe7871a520216e9abbe2dda7c812277cf2f6d45697e2e2c164fda90f2fd6" address="unix:///run/containerd/s/4aa9c18cfd582cf0d821d9b14024ddf4a9da0cb665010e209f4398b1ec9e8190" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:33.306438 kubelet[2829]: I0116 21:21:33.304868 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c8c0e82-b18e-4cf2-bc74-ab0296b892f6-kubelet-dir\") pod \"csi-node-driver-4hncm\" (UID: \"8c8c0e82-b18e-4cf2-bc74-ab0296b892f6\") " pod="calico-system/csi-node-driver-4hncm" Jan 16 21:21:33.306438 kubelet[2829]: I0116 21:21:33.304926 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gx62\" (UniqueName: \"kubernetes.io/projected/8c8c0e82-b18e-4cf2-bc74-ab0296b892f6-kube-api-access-7gx62\") pod \"csi-node-driver-4hncm\" (UID: \"8c8c0e82-b18e-4cf2-bc74-ab0296b892f6\") " pod="calico-system/csi-node-driver-4hncm" Jan 16 21:21:33.306438 kubelet[2829]: I0116 21:21:33.305010 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8c8c0e82-b18e-4cf2-bc74-ab0296b892f6-varrun\") pod \"csi-node-driver-4hncm\" (UID: \"8c8c0e82-b18e-4cf2-bc74-ab0296b892f6\") " pod="calico-system/csi-node-driver-4hncm" Jan 16 21:21:33.306438 kubelet[2829]: I0116 21:21:33.305059 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c8c0e82-b18e-4cf2-bc74-ab0296b892f6-registration-dir\") pod \"csi-node-driver-4hncm\" (UID: \"8c8c0e82-b18e-4cf2-bc74-ab0296b892f6\") " pod="calico-system/csi-node-driver-4hncm" Jan 16 21:21:33.306438 kubelet[2829]: I0116 21:21:33.305213 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c8c0e82-b18e-4cf2-bc74-ab0296b892f6-socket-dir\") pod \"csi-node-driver-4hncm\" (UID: \"8c8c0e82-b18e-4cf2-bc74-ab0296b892f6\") " pod="calico-system/csi-node-driver-4hncm" Jan 16 21:21:33.308788 systemd[1]: Started cri-containerd-b419fe7871a520216e9abbe2dda7c812277cf2f6d45697e2e2c164fda90f2fd6.scope - libcontainer container 
b419fe7871a520216e9abbe2dda7c812277cf2f6d45697e2e2c164fda90f2fd6. Jan 16 21:21:33.323825 kubelet[2829]: E0116 21:21:33.323056 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.323825 kubelet[2829]: W0116 21:21:33.323276 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.323825 kubelet[2829]: E0116 21:21:33.323362 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.329499 kubelet[2829]: E0116 21:21:33.329476 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.329674 kubelet[2829]: W0116 21:21:33.329652 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.329820 kubelet[2829]: E0116 21:21:33.329801 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.337170 kubelet[2829]: E0116 21:21:33.335501 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.339911 kubelet[2829]: W0116 21:21:33.337352 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.341287 kubelet[2829]: E0116 21:21:33.340332 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.344441 kubelet[2829]: E0116 21:21:33.344321 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.344441 kubelet[2829]: W0116 21:21:33.344391 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.349277 kubelet[2829]: E0116 21:21:33.349226 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.349277 kubelet[2829]: W0116 21:21:33.349246 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.351305 kubelet[2829]: E0116 21:21:33.350793 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.351305 kubelet[2829]: E0116 21:21:33.350894 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.351559 kubelet[2829]: E0116 21:21:33.351433 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.351559 kubelet[2829]: W0116 21:21:33.351506 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.352224 kubelet[2829]: E0116 21:21:33.352044 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.353679 kubelet[2829]: E0116 21:21:33.353367 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.353679 kubelet[2829]: W0116 21:21:33.353473 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.354231 kubelet[2829]: E0116 21:21:33.354190 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.358452 kubelet[2829]: E0116 21:21:33.357985 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.358729 kubelet[2829]: W0116 21:21:33.358696 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.360872 kubelet[2829]: E0116 21:21:33.360230 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.360929 kubelet[2829]: E0116 21:21:33.360912 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.360962 kubelet[2829]: W0116 21:21:33.360927 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.361396 kubelet[2829]: E0116 21:21:33.361375 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.361805 kubelet[2829]: E0116 21:21:33.361769 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.361805 kubelet[2829]: W0116 21:21:33.361784 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.362380 kubelet[2829]: E0116 21:21:33.362200 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.366459 kubelet[2829]: E0116 21:21:33.366251 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.366650 kubelet[2829]: W0116 21:21:33.366559 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.366957 kubelet[2829]: E0116 21:21:33.366931 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.368331 kubelet[2829]: E0116 21:21:33.368293 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.368331 kubelet[2829]: W0116 21:21:33.368310 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.369717 kubelet[2829]: E0116 21:21:33.369676 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.369980 kubelet[2829]: E0116 21:21:33.369966 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.370068 kubelet[2829]: W0116 21:21:33.370051 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.370388 kubelet[2829]: E0116 21:21:33.370338 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.374813 kubelet[2829]: E0116 21:21:33.374792 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.374970 kubelet[2829]: W0116 21:21:33.374896 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.375241 kubelet[2829]: E0116 21:21:33.375054 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.375931 kubelet[2829]: E0116 21:21:33.375916 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.376039 kubelet[2829]: W0116 21:21:33.376022 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.376715 kubelet[2829]: E0116 21:21:33.376553 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.385927 kubelet[2829]: E0116 21:21:33.385837 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.385927 kubelet[2829]: W0116 21:21:33.385920 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.387338 kubelet[2829]: E0116 21:21:33.387312 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.389206 kubelet[2829]: E0116 21:21:33.388801 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.389481 kubelet[2829]: W0116 21:21:33.389228 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.390799 kubelet[2829]: E0116 21:21:33.390769 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.395375 kubelet[2829]: E0116 21:21:33.394958 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.396959 kubelet[2829]: W0116 21:21:33.395530 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.397534 kubelet[2829]: E0116 21:21:33.397442 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.401257 kubelet[2829]: E0116 21:21:33.400900 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.401257 kubelet[2829]: W0116 21:21:33.400918 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.401257 kubelet[2829]: E0116 21:21:33.400938 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.402230 kubelet[2829]: E0116 21:21:33.401803 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.402369 kubelet[2829]: W0116 21:21:33.402281 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.402418 kubelet[2829]: E0116 21:21:33.402369 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.403303 kubelet[2829]: E0116 21:21:33.403242 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.403355 kubelet[2829]: W0116 21:21:33.403259 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.403355 kubelet[2829]: E0116 21:21:33.403345 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.407191 kubelet[2829]: E0116 21:21:33.406731 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.407191 kubelet[2829]: W0116 21:21:33.406748 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.407191 kubelet[2829]: E0116 21:21:33.406768 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.408490 kubelet[2829]: E0116 21:21:33.408475 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.408566 kubelet[2829]: W0116 21:21:33.408553 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.409235 kubelet[2829]: E0116 21:21:33.409057 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.411376 kubelet[2829]: E0116 21:21:33.411263 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.411376 kubelet[2829]: W0116 21:21:33.411343 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.411376 kubelet[2829]: E0116 21:21:33.411367 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.412315 kubelet[2829]: E0116 21:21:33.411836 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.412315 kubelet[2829]: W0116 21:21:33.411909 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.412443 kubelet[2829]: E0116 21:21:33.412425 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.414002 kubelet[2829]: E0116 21:21:33.413917 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.414002 kubelet[2829]: W0116 21:21:33.413987 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.414206 kubelet[2829]: E0116 21:21:33.414060 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.415884 kubelet[2829]: E0116 21:21:33.415398 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.415884 kubelet[2829]: W0116 21:21:33.415411 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.415884 kubelet[2829]: E0116 21:21:33.415464 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.415992 kubelet[2829]: E0116 21:21:33.415893 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.415992 kubelet[2829]: W0116 21:21:33.415904 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.416385 kubelet[2829]: E0116 21:21:33.416284 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.416872 kubelet[2829]: E0116 21:21:33.416776 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.416872 kubelet[2829]: W0116 21:21:33.416851 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.417692 kubelet[2829]: E0116 21:21:33.417337 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.417988 kubelet[2829]: E0116 21:21:33.417951 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.417988 kubelet[2829]: W0116 21:21:33.417970 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.418456 kubelet[2829]: E0116 21:21:33.418429 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.419482 kubelet[2829]: E0116 21:21:33.419467 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.419556 kubelet[2829]: W0116 21:21:33.419541 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.420308 kubelet[2829]: E0116 21:21:33.420221 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.422178 kubelet[2829]: E0116 21:21:33.422020 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.423173 kubelet[2829]: W0116 21:21:33.423153 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.424540 kubelet[2829]: E0116 21:21:33.424064 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.426457 kubelet[2829]: E0116 21:21:33.426364 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.426457 kubelet[2829]: W0116 21:21:33.426383 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.426670 kubelet[2829]: E0116 21:21:33.426646 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.429733 kubelet[2829]: E0116 21:21:33.429384 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.429733 kubelet[2829]: W0116 21:21:33.429406 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.429733 kubelet[2829]: E0116 21:21:33.429638 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.430421 kubelet[2829]: E0116 21:21:33.430393 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.430421 kubelet[2829]: W0116 21:21:33.430405 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.430552 kubelet[2829]: E0116 21:21:33.430539 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.432762 kubelet[2829]: E0116 21:21:33.432747 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.432845 kubelet[2829]: W0116 21:21:33.432820 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.433429 kubelet[2829]: E0116 21:21:33.433318 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.433976 kubelet[2829]: E0116 21:21:33.433962 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.434044 kubelet[2829]: W0116 21:21:33.434033 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.434318 kubelet[2829]: E0116 21:21:33.434305 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.434933 kubelet[2829]: E0116 21:21:33.434905 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.434933 kubelet[2829]: W0116 21:21:33.434918 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.435536 kubelet[2829]: E0116 21:21:33.435475 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.435954 kubelet[2829]: E0116 21:21:33.435939 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.436027 kubelet[2829]: W0116 21:21:33.436014 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.436513 kubelet[2829]: E0116 21:21:33.436486 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.437548 kubelet[2829]: E0116 21:21:33.437382 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.437548 kubelet[2829]: W0116 21:21:33.437401 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.437548 kubelet[2829]: E0116 21:21:33.437543 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.438668 kubelet[2829]: E0116 21:21:33.438555 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.438668 kubelet[2829]: W0116 21:21:33.438643 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.439467 kubelet[2829]: E0116 21:21:33.439447 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.439856 kubelet[2829]: E0116 21:21:33.439817 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.439856 kubelet[2829]: W0116 21:21:33.439835 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.440691 kubelet[2829]: E0116 21:21:33.440534 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.440826 kubelet[2829]: E0116 21:21:33.440812 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.440894 kubelet[2829]: W0116 21:21:33.440879 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.440000 audit: BPF prog-id=151 op=LOAD Jan 16 21:21:33.441645 kubelet[2829]: E0116 21:21:33.441519 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.444000 audit: BPF prog-id=152 op=LOAD Jan 16 21:21:33.444000 audit[3270]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3259 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313966653738373161353230323136653961626265326464613763 Jan 16 21:21:33.444000 audit: BPF prog-id=152 op=UNLOAD Jan 16 21:21:33.444000 audit[3270]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313966653738373161353230323136653961626265326464613763 Jan 16 21:21:33.444000 audit: BPF prog-id=153 op=LOAD Jan 16 21:21:33.444000 audit[3270]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3259 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.444000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313966653738373161353230323136653961626265326464613763 Jan 16 21:21:33.444000 audit: BPF prog-id=154 op=LOAD Jan 16 21:21:33.444000 audit[3270]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3259 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313966653738373161353230323136653961626265326464613763 Jan 16 21:21:33.444000 audit: BPF prog-id=154 op=UNLOAD Jan 16 21:21:33.444000 audit[3270]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313966653738373161353230323136653961626265326464613763 Jan 16 21:21:33.444000 audit: BPF prog-id=153 op=UNLOAD Jan 16 21:21:33.444000 audit[3270]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:21:33.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313966653738373161353230323136653961626265326464613763 Jan 16 21:21:33.444000 audit: BPF prog-id=155 op=LOAD Jan 16 21:21:33.444000 audit[3270]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3259 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313966653738373161353230323136653961626265326464613763 Jan 16 21:21:33.455686 kubelet[2829]: E0116 21:21:33.446308 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.455686 kubelet[2829]: W0116 21:21:33.446328 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.455686 kubelet[2829]: E0116 21:21:33.446447 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.457025 kubelet[2829]: E0116 21:21:33.456419 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.457025 kubelet[2829]: W0116 21:21:33.456456 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.457025 kubelet[2829]: E0116 21:21:33.456552 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.461304 kubelet[2829]: E0116 21:21:33.461219 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.462447 kubelet[2829]: W0116 21:21:33.461947 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.462447 kubelet[2829]: E0116 21:21:33.461979 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.463697 kubelet[2829]: E0116 21:21:33.463477 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.464052 kubelet[2829]: W0116 21:21:33.463792 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.464052 kubelet[2829]: E0116 21:21:33.463819 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 21:21:33.486500 kubelet[2829]: E0116 21:21:33.486347 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 21:21:33.486500 kubelet[2829]: W0116 21:21:33.486427 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 21:21:33.486500 kubelet[2829]: E0116 21:21:33.486455 2829 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 21:21:33.563762 containerd[1596]: time="2026-01-16T21:21:33.563548074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c7d5b58d-nzvq5,Uid:65cfaeb6-56b1-4413-a2a3-7e48f39784a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b419fe7871a520216e9abbe2dda7c812277cf2f6d45697e2e2c164fda90f2fd6\"" Jan 16 21:21:33.580213 kubelet[2829]: E0116 21:21:33.579342 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:33.585831 containerd[1596]: time="2026-01-16T21:21:33.585417253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 21:21:33.689269 kubelet[2829]: E0116 21:21:33.688788 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:33.690994 containerd[1596]: time="2026-01-16T21:21:33.690945918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58bkq,Uid:6c88b5a1-7790-44a1-92fd-40c46ca67f7f,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:33.848180 containerd[1596]: time="2026-01-16T21:21:33.847066124Z" level=info msg="connecting to shim 5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471" address="unix:///run/containerd/s/8449c2d095ca966ebca69811b67ef23e438bb3cbc18448dc995cbd9e824afad7" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:21:33.848000 audit[3358]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:33.848000 audit[3358]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe84939080 a2=0 a3=7ffe8493906c items=0 ppid=2988 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:33.857000 audit[3358]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:33.857000 audit[3358]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe84939080 a2=0 a3=0 items=0 ppid=2988 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:33.944058 systemd[1]: Started cri-containerd-5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471.scope - libcontainer container 5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471. 
Jan 16 21:21:33.991000 audit: BPF prog-id=156 op=LOAD Jan 16 21:21:33.995000 audit: BPF prog-id=157 op=LOAD Jan 16 21:21:33.995000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323766653031323561323936373533376135396466303465316561 Jan 16 21:21:33.995000 audit: BPF prog-id=157 op=UNLOAD Jan 16 21:21:33.995000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323766653031323561323936373533376135396466303465316561 Jan 16 21:21:33.996000 audit: BPF prog-id=158 op=LOAD Jan 16 21:21:33.996000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.996000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323766653031323561323936373533376135396466303465316561 Jan 16 21:21:33.997000 audit: BPF prog-id=159 op=LOAD Jan 16 21:21:33.997000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323766653031323561323936373533376135396466303465316561 Jan 16 21:21:33.997000 audit: BPF prog-id=159 op=UNLOAD Jan 16 21:21:33.997000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323766653031323561323936373533376135396466303465316561 Jan 16 21:21:33.997000 audit: BPF prog-id=158 op=UNLOAD Jan 16 21:21:33.997000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:21:33.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323766653031323561323936373533376135396466303465316561 Jan 16 21:21:33.998000 audit: BPF prog-id=160 op=LOAD Jan 16 21:21:33.998000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3357 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:33.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323766653031323561323936373533376135396466303465316561 Jan 16 21:21:34.117343 containerd[1596]: time="2026-01-16T21:21:34.116887110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58bkq,Uid:6c88b5a1-7790-44a1-92fd-40c46ca67f7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471\"" Jan 16 21:21:34.119637 kubelet[2829]: E0116 21:21:34.119447 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:34.442676 kubelet[2829]: E0116 21:21:34.442388 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:34.476791 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount692226461.mount: Deactivated successfully. Jan 16 21:21:36.401899 containerd[1596]: time="2026-01-16T21:21:36.401762590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:36.406532 containerd[1596]: time="2026-01-16T21:21:36.406419964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 16 21:21:36.415531 containerd[1596]: time="2026-01-16T21:21:36.412945861Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:36.416736 containerd[1596]: time="2026-01-16T21:21:36.416437947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:36.417858 containerd[1596]: time="2026-01-16T21:21:36.417069447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.831546867s" Jan 16 21:21:36.417858 containerd[1596]: time="2026-01-16T21:21:36.417250724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 16 21:21:36.422837 containerd[1596]: time="2026-01-16T21:21:36.422276162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 21:21:36.442718 kubelet[2829]: E0116 21:21:36.442492 2829 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:36.453966 containerd[1596]: time="2026-01-16T21:21:36.453927995Z" level=info msg="CreateContainer within sandbox \"b419fe7871a520216e9abbe2dda7c812277cf2f6d45697e2e2c164fda90f2fd6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 21:21:36.479184 containerd[1596]: time="2026-01-16T21:21:36.478296168Z" level=info msg="Container 680dc2cfcd204ebff058f9734dbb84d432a87dae10916c39e692ba060cda1266: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:21:36.507895 containerd[1596]: time="2026-01-16T21:21:36.505990227Z" level=info msg="CreateContainer within sandbox \"b419fe7871a520216e9abbe2dda7c812277cf2f6d45697e2e2c164fda90f2fd6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"680dc2cfcd204ebff058f9734dbb84d432a87dae10916c39e692ba060cda1266\"" Jan 16 21:21:36.510150 containerd[1596]: time="2026-01-16T21:21:36.509743996Z" level=info msg="StartContainer for \"680dc2cfcd204ebff058f9734dbb84d432a87dae10916c39e692ba060cda1266\"" Jan 16 21:21:36.512557 containerd[1596]: time="2026-01-16T21:21:36.512530176Z" level=info msg="connecting to shim 680dc2cfcd204ebff058f9734dbb84d432a87dae10916c39e692ba060cda1266" address="unix:///run/containerd/s/4aa9c18cfd582cf0d821d9b14024ddf4a9da0cb665010e209f4398b1ec9e8190" protocol=ttrpc version=3 Jan 16 21:21:36.556867 systemd[1]: Started cri-containerd-680dc2cfcd204ebff058f9734dbb84d432a87dae10916c39e692ba060cda1266.scope - libcontainer container 680dc2cfcd204ebff058f9734dbb84d432a87dae10916c39e692ba060cda1266. 
Jan 16 21:21:36.596000 audit: BPF prog-id=161 op=LOAD Jan 16 21:21:36.605227 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 16 21:21:36.605329 kernel: audit: type=1334 audit(1768598496.596:550): prog-id=161 op=LOAD Jan 16 21:21:36.602000 audit: BPF prog-id=162 op=LOAD Jan 16 21:21:36.616276 kernel: audit: type=1334 audit(1768598496.602:551): prog-id=162 op=LOAD Jan 16 21:21:36.602000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.638499 kernel: audit: type=1300 audit(1768598496.602:551): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.664787 kernel: audit: type=1327 audit(1768598496.602:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.664908 kernel: audit: type=1334 audit(1768598496.602:552): prog-id=162 op=UNLOAD Jan 16 21:21:36.602000 audit: BPF prog-id=162 op=UNLOAD Jan 16 21:21:36.602000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3404 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.698213 kernel: audit: type=1300 audit(1768598496.602:552): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.602000 audit: BPF prog-id=163 op=LOAD Jan 16 21:21:36.728656 kernel: audit: type=1327 audit(1768598496.602:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.728749 kernel: audit: type=1334 audit(1768598496.602:553): prog-id=163 op=LOAD Jan 16 21:21:36.602000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.759424 kernel: audit: type=1300 audit(1768598496.602:553): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:21:36.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.785923 kernel: audit: type=1327 audit(1768598496.602:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.602000 audit: BPF prog-id=164 op=LOAD Jan 16 21:21:36.602000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.602000 audit: BPF prog-id=164 op=UNLOAD Jan 16 21:21:36.602000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 
Jan 16 21:21:36.602000 audit: BPF prog-id=163 op=UNLOAD Jan 16 21:21:36.602000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.602000 audit: BPF prog-id=165 op=LOAD Jan 16 21:21:36.602000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3259 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:36.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638306463326366636432303465626666303538663937333464626238 Jan 16 21:21:36.805235 containerd[1596]: time="2026-01-16T21:21:36.804583954Z" level=info msg="StartContainer for \"680dc2cfcd204ebff058f9734dbb84d432a87dae10916c39e692ba060cda1266\" returns successfully" Jan 16 21:21:37.256284 containerd[1596]: time="2026-01-16T21:21:37.254403596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:37.258834 containerd[1596]: time="2026-01-16T21:21:37.257387755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 16 
21:21:37.264832 containerd[1596]: time="2026-01-16T21:21:37.263505531Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:37.279786 containerd[1596]: time="2026-01-16T21:21:37.279737102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:37.283380 containerd[1596]: time="2026-01-16T21:21:37.283192460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 860.771817ms" Jan 16 21:21:37.283380 containerd[1596]: time="2026-01-16T21:21:37.283224600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 16 21:21:37.295929 containerd[1596]: time="2026-01-16T21:21:37.295775942Z" level=info msg="CreateContainer within sandbox \"5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 21:21:37.342874 containerd[1596]: time="2026-01-16T21:21:37.342548504Z" level=info msg="Container 84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:21:37.376238 containerd[1596]: time="2026-01-16T21:21:37.376025839Z" level=info msg="CreateContainer within sandbox \"5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7\"" Jan 16 21:21:37.383345 containerd[1596]: time="2026-01-16T21:21:37.383312547Z" level=info msg="StartContainer for \"84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7\"" Jan 16 21:21:37.387667 containerd[1596]: time="2026-01-16T21:21:37.387566095Z" level=info msg="connecting to shim 84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7" address="unix:///run/containerd/s/8449c2d095ca966ebca69811b67ef23e438bb3cbc18448dc995cbd9e824afad7" protocol=ttrpc version=3 Jan 16 21:21:37.473429 kubelet[2829]: E0116 21:21:37.473019 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:37.481480 systemd[1]: Started cri-containerd-84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7.scope - libcontainer container 84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7. 
Jan 16 21:21:37.634000 audit: BPF prog-id=166 op=LOAD Jan 16 21:21:37.634000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3357 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:37.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834666663623236653363613635633431373530626133363330383433 Jan 16 21:21:37.634000 audit: BPF prog-id=167 op=LOAD Jan 16 21:21:37.634000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3357 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:37.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834666663623236653363613635633431373530626133363330383433 Jan 16 21:21:37.634000 audit: BPF prog-id=167 op=UNLOAD Jan 16 21:21:37.634000 audit[3444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:37.634000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834666663623236653363613635633431373530626133363330383433 Jan 16 21:21:37.634000 audit: BPF prog-id=166 op=UNLOAD Jan 16 21:21:37.634000 audit[3444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:37.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834666663623236653363613635633431373530626133363330383433 Jan 16 21:21:37.634000 audit: BPF prog-id=168 op=LOAD Jan 16 21:21:37.634000 audit[3444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3357 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:37.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834666663623236653363613635633431373530626133363330383433 Jan 16 21:21:37.709777 containerd[1596]: time="2026-01-16T21:21:37.707577025Z" level=info msg="StartContainer for \"84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7\" returns successfully" Jan 16 21:21:37.731972 systemd[1]: cri-containerd-84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7.scope: Deactivated successfully. 
Jan 16 21:21:37.732720 systemd[1]: cri-containerd-84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7.scope: Consumed 94ms CPU time, 6.7M memory peak, 3.6M written to disk. Jan 16 21:21:37.736000 audit: BPF prog-id=168 op=UNLOAD Jan 16 21:21:37.738324 containerd[1596]: time="2026-01-16T21:21:37.735578131Z" level=info msg="received container exit event container_id:\"84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7\" id:\"84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7\" pid:3459 exited_at:{seconds:1768598497 nanos:734515615}" Jan 16 21:21:37.751483 kubelet[2829]: E0116 21:21:37.751402 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:37.767041 kubelet[2829]: E0116 21:21:37.767001 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:37.818565 kubelet[2829]: I0116 21:21:37.818400 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57c7d5b58d-nzvq5" podStartSLOduration=2.979128774 podStartE2EDuration="5.818380107s" podCreationTimestamp="2026-01-16 21:21:32 +0000 UTC" firstStartedPulling="2026-01-16 21:21:33.582320706 +0000 UTC m=+24.459351712" lastFinishedPulling="2026-01-16 21:21:36.421572039 +0000 UTC m=+27.298603045" observedRunningTime="2026-01-16 21:21:37.794574433 +0000 UTC m=+28.671605449" watchObservedRunningTime="2026-01-16 21:21:37.818380107 +0000 UTC m=+28.695411134" Jan 16 21:21:37.825957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84ffcb26e3ca65c41750ba3630843e5d31d784509d79bc9b6a32f6792fc357b7-rootfs.mount: Deactivated successfully. 
Jan 16 21:21:37.888000 audit[3499]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:37.888000 audit[3499]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf6bd5e60 a2=0 a3=7ffcf6bd5e4c items=0 ppid=2988 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:37.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:37.896000 audit[3499]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:21:37.896000 audit[3499]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcf6bd5e60 a2=0 a3=7ffcf6bd5e4c items=0 ppid=2988 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:37.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:21:38.777290 kubelet[2829]: E0116 21:21:38.776806 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:38.777290 kubelet[2829]: E0116 21:21:38.776950 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:38.778287 containerd[1596]: time="2026-01-16T21:21:38.777992108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" 
Jan 16 21:21:39.448473 kubelet[2829]: E0116 21:21:39.446997 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:39.797892 kubelet[2829]: E0116 21:21:39.796570 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:41.443289 kubelet[2829]: E0116 21:21:41.442988 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:43.167726 containerd[1596]: time="2026-01-16T21:21:43.166396837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:43.172218 containerd[1596]: time="2026-01-16T21:21:43.171947205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70445002" Jan 16 21:21:43.175722 containerd[1596]: time="2026-01-16T21:21:43.175683888Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:43.179929 containerd[1596]: time="2026-01-16T21:21:43.179805767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 21:21:43.180692 containerd[1596]: 
time="2026-01-16T21:21:43.180500451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.402420249s" Jan 16 21:21:43.180692 containerd[1596]: time="2026-01-16T21:21:43.180539323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 16 21:21:43.185390 containerd[1596]: time="2026-01-16T21:21:43.185305662Z" level=info msg="CreateContainer within sandbox \"5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 21:21:43.222488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332691745.mount: Deactivated successfully. 
Jan 16 21:21:43.223518 containerd[1596]: time="2026-01-16T21:21:43.223475520Z" level=info msg="Container fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:21:43.255891 containerd[1596]: time="2026-01-16T21:21:43.255752511Z" level=info msg="CreateContainer within sandbox \"5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c\"" Jan 16 21:21:43.262463 containerd[1596]: time="2026-01-16T21:21:43.259054617Z" level=info msg="StartContainer for \"fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c\"" Jan 16 21:21:43.264510 containerd[1596]: time="2026-01-16T21:21:43.264473172Z" level=info msg="connecting to shim fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c" address="unix:///run/containerd/s/8449c2d095ca966ebca69811b67ef23e438bb3cbc18448dc995cbd9e824afad7" protocol=ttrpc version=3 Jan 16 21:21:43.324810 systemd[1]: Started cri-containerd-fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c.scope - libcontainer container fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c. 
Jan 16 21:21:43.418000 audit: BPF prog-id=169 op=LOAD Jan 16 21:21:43.427363 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 16 21:21:43.427468 kernel: audit: type=1334 audit(1768598503.418:566): prog-id=169 op=LOAD Jan 16 21:21:43.434488 kernel: audit: type=1300 audit(1768598503.418:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.418000 audit[3508]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.452013 kubelet[2829]: E0116 21:21:43.450363 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:43.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.508040 kernel: audit: type=1327 audit(1768598503.418:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.418000 audit: BPF prog-id=170 op=LOAD Jan 16 
21:21:43.418000 audit[3508]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.553834 kernel: audit: type=1334 audit(1768598503.418:567): prog-id=170 op=LOAD Jan 16 21:21:43.554263 kernel: audit: type=1300 audit(1768598503.418:567): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.554312 kernel: audit: type=1327 audit(1768598503.418:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.418000 audit: BPF prog-id=170 op=UNLOAD Jan 16 21:21:43.593982 kernel: audit: type=1334 audit(1768598503.418:568): prog-id=170 op=UNLOAD Jan 16 21:21:43.594248 kernel: audit: type=1300 audit(1768598503.418:568): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.418000 audit[3508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=16 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.619583 containerd[1596]: time="2026-01-16T21:21:43.618457707Z" level=info msg="StartContainer for \"fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c\" returns successfully" Jan 16 21:21:43.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.681822 kernel: audit: type=1327 audit(1768598503.418:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.418000 audit: BPF prog-id=169 op=UNLOAD Jan 16 21:21:43.418000 audit[3508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.693430 kernel: audit: type=1334 audit(1768598503.418:569): prog-id=169 op=UNLOAD Jan 16 21:21:43.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.418000 audit: BPF prog-id=171 op=LOAD Jan 16 21:21:43.418000 audit[3508]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 
a1=c0001866e8 a2=98 a3=0 items=0 ppid=3357 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:21:43.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643737306133356439333832396633623539623330333732383135 Jan 16 21:21:43.821439 kubelet[2829]: E0116 21:21:43.821284 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:44.824777 kubelet[2829]: E0116 21:21:44.824368 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:45.404350 systemd[1]: cri-containerd-fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c.scope: Deactivated successfully. Jan 16 21:21:45.406372 systemd[1]: cri-containerd-fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c.scope: Consumed 1.533s CPU time, 171.2M memory peak, 3.7M read from disk, 171.3M written to disk. 
Jan 16 21:21:45.410880 containerd[1596]: time="2026-01-16T21:21:45.410836639Z" level=info msg="received container exit event container_id:\"fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c\" id:\"fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c\" pid:3522 exited_at:{seconds:1768598505 nanos:409984775}" Jan 16 21:21:45.410000 audit: BPF prog-id=171 op=UNLOAD Jan 16 21:21:45.450655 kubelet[2829]: E0116 21:21:45.447313 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:21:45.528727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fed770a35d93829f3b59b30372815399ddd774f631a92d0c2b95abd29b61678c-rootfs.mount: Deactivated successfully. Jan 16 21:21:45.554330 kubelet[2829]: I0116 21:21:45.550937 2829 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 16 21:21:45.822425 kubelet[2829]: I0116 21:21:45.821511 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf888ed5-265d-4b90-8b8f-76579a07e031-calico-apiserver-certs\") pod \"calico-apiserver-6f68b6d698-x2ltk\" (UID: \"cf888ed5-265d-4b90-8b8f-76579a07e031\") " pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" Jan 16 21:21:45.822425 kubelet[2829]: I0116 21:21:45.821568 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ptz\" (UniqueName: \"kubernetes.io/projected/cf888ed5-265d-4b90-8b8f-76579a07e031-kube-api-access-77ptz\") pod \"calico-apiserver-6f68b6d698-x2ltk\" (UID: \"cf888ed5-265d-4b90-8b8f-76579a07e031\") " pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" Jan 16 
21:21:45.822425 kubelet[2829]: I0116 21:21:45.821739 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqq29\" (UniqueName: \"kubernetes.io/projected/27a58ce5-0b24-4017-b5c5-f30f4c025ef8-kube-api-access-kqq29\") pod \"coredns-668d6bf9bc-6vb67\" (UID: \"27a58ce5-0b24-4017-b5c5-f30f4c025ef8\") " pod="kube-system/coredns-668d6bf9bc-6vb67"
Jan 16 21:21:45.822425 kubelet[2829]: I0116 21:21:45.821766 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zkb8\" (UniqueName: \"kubernetes.io/projected/044f9539-8858-49e2-8876-e2c650ad8d77-kube-api-access-4zkb8\") pod \"goldmane-666569f655-j7hqz\" (UID: \"044f9539-8858-49e2-8876-e2c650ad8d77\") " pod="calico-system/goldmane-666569f655-j7hqz"
Jan 16 21:21:45.822425 kubelet[2829]: I0116 21:21:45.821791 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/044f9539-8858-49e2-8876-e2c650ad8d77-goldmane-key-pair\") pod \"goldmane-666569f655-j7hqz\" (UID: \"044f9539-8858-49e2-8876-e2c650ad8d77\") " pod="calico-system/goldmane-666569f655-j7hqz"
Jan 16 21:21:45.822918 kubelet[2829]: I0116 21:21:45.821821 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/044f9539-8858-49e2-8876-e2c650ad8d77-goldmane-ca-bundle\") pod \"goldmane-666569f655-j7hqz\" (UID: \"044f9539-8858-49e2-8876-e2c650ad8d77\") " pod="calico-system/goldmane-666569f655-j7hqz"
Jan 16 21:21:45.822918 kubelet[2829]: I0116 21:21:45.821853 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/044f9539-8858-49e2-8876-e2c650ad8d77-config\") pod \"goldmane-666569f655-j7hqz\" (UID: \"044f9539-8858-49e2-8876-e2c650ad8d77\") " pod="calico-system/goldmane-666569f655-j7hqz"
Jan 16 21:21:45.822918 kubelet[2829]: I0116 21:21:45.821878 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-ca-bundle\") pod \"whisker-65b458f79c-klwcj\" (UID: \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\") " pod="calico-system/whisker-65b458f79c-klwcj"
Jan 16 21:21:45.822918 kubelet[2829]: I0116 21:21:45.821905 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-backend-key-pair\") pod \"whisker-65b458f79c-klwcj\" (UID: \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\") " pod="calico-system/whisker-65b458f79c-klwcj"
Jan 16 21:21:45.822918 kubelet[2829]: I0116 21:21:45.821934 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/484b15e8-2e9e-4270-8a9c-899b52ca1f08-calico-apiserver-certs\") pod \"calico-apiserver-6f68b6d698-6gdmk\" (UID: \"484b15e8-2e9e-4270-8a9c-899b52ca1f08\") " pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk"
Jan 16 21:21:45.825269 kubelet[2829]: I0116 21:21:45.821963 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe95499a-0c2a-421c-aaa9-9ead2566d247-tigera-ca-bundle\") pod \"calico-kube-controllers-66dd98b47c-2sbfh\" (UID: \"fe95499a-0c2a-421c-aaa9-9ead2566d247\") " pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh"
Jan 16 21:21:45.825269 kubelet[2829]: I0116 21:21:45.821985 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890-config-volume\") pod \"coredns-668d6bf9bc-tzvp2\" (UID: \"35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890\") " pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:21:45.825269 kubelet[2829]: I0116 21:21:45.822011 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czw4z\" (UniqueName: \"kubernetes.io/projected/484b15e8-2e9e-4270-8a9c-899b52ca1f08-kube-api-access-czw4z\") pod \"calico-apiserver-6f68b6d698-6gdmk\" (UID: \"484b15e8-2e9e-4270-8a9c-899b52ca1f08\") " pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk"
Jan 16 21:21:45.825269 kubelet[2829]: I0116 21:21:45.822034 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27a58ce5-0b24-4017-b5c5-f30f4c025ef8-config-volume\") pod \"coredns-668d6bf9bc-6vb67\" (UID: \"27a58ce5-0b24-4017-b5c5-f30f4c025ef8\") " pod="kube-system/coredns-668d6bf9bc-6vb67"
Jan 16 21:21:45.825269 kubelet[2829]: I0116 21:21:45.822057 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rvq\" (UniqueName: \"kubernetes.io/projected/35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890-kube-api-access-27rvq\") pod \"coredns-668d6bf9bc-tzvp2\" (UID: \"35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890\") " pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:21:45.827358 systemd[1]: Created slice kubepods-besteffort-pod044f9539_8858_49e2_8876_e2c650ad8d77.slice - libcontainer container kubepods-besteffort-pod044f9539_8858_49e2_8876_e2c650ad8d77.slice.
Jan 16 21:21:45.836009 kubelet[2829]: I0116 21:21:45.833294 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlhd\" (UniqueName: \"kubernetes.io/projected/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-kube-api-access-8qlhd\") pod \"whisker-65b458f79c-klwcj\" (UID: \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\") " pod="calico-system/whisker-65b458f79c-klwcj"
Jan 16 21:21:45.836009 kubelet[2829]: I0116 21:21:45.833799 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw7zx\" (UniqueName: \"kubernetes.io/projected/fe95499a-0c2a-421c-aaa9-9ead2566d247-kube-api-access-qw7zx\") pod \"calico-kube-controllers-66dd98b47c-2sbfh\" (UID: \"fe95499a-0c2a-421c-aaa9-9ead2566d247\") " pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh"
Jan 16 21:21:45.853991 systemd[1]: Created slice kubepods-burstable-pod27a58ce5_0b24_4017_b5c5_f30f4c025ef8.slice - libcontainer container kubepods-burstable-pod27a58ce5_0b24_4017_b5c5_f30f4c025ef8.slice.
Jan 16 21:21:45.924426 systemd[1]: Created slice kubepods-besteffort-podfe95499a_0c2a_421c_aaa9_9ead2566d247.slice - libcontainer container kubepods-besteffort-podfe95499a_0c2a_421c_aaa9_9ead2566d247.slice.
Jan 16 21:21:45.992693 kubelet[2829]: E0116 21:21:45.991926 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:46.001000 containerd[1596]: time="2026-01-16T21:21:46.000912820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 16 21:21:46.002654 systemd[1]: Created slice kubepods-besteffort-pod1bb2c8da_8b40_42bd_b0b7_c9e61aa8909d.slice - libcontainer container kubepods-besteffort-pod1bb2c8da_8b40_42bd_b0b7_c9e61aa8909d.slice.
Jan 16 21:21:46.116391 systemd[1]: Created slice kubepods-besteffort-podcf888ed5_265d_4b90_8b8f_76579a07e031.slice - libcontainer container kubepods-besteffort-podcf888ed5_265d_4b90_8b8f_76579a07e031.slice.
Jan 16 21:21:46.176432 systemd[1]: Created slice kubepods-burstable-pod35e6cf4c_2c1d_4d9f_ace9_c3378ebf9890.slice - libcontainer container kubepods-burstable-pod35e6cf4c_2c1d_4d9f_ace9_c3378ebf9890.slice.
Jan 16 21:21:46.199856 kubelet[2829]: E0116 21:21:46.197949 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:46.207297 systemd[1]: Created slice kubepods-besteffort-pod484b15e8_2e9e_4270_8a9c_899b52ca1f08.slice - libcontainer container kubepods-besteffort-pod484b15e8_2e9e_4270_8a9c_899b52ca1f08.slice.
Jan 16 21:21:46.213935 containerd[1596]: time="2026-01-16T21:21:46.213890685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6vb67,Uid:27a58ce5-0b24-4017-b5c5-f30f4c025ef8,Namespace:kube-system,Attempt:0,}"
Jan 16 21:21:46.221496 containerd[1596]: time="2026-01-16T21:21:46.221194199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-6gdmk,Uid:484b15e8-2e9e-4270-8a9c-899b52ca1f08,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 21:21:46.253833 containerd[1596]: time="2026-01-16T21:21:46.253499726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,}"
Jan 16 21:21:46.420737 containerd[1596]: time="2026-01-16T21:21:46.418860470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65b458f79c-klwcj,Uid:1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d,Namespace:calico-system,Attempt:0,}"
Jan 16 21:21:46.482170 containerd[1596]: time="2026-01-16T21:21:46.481430348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j7hqz,Uid:044f9539-8858-49e2-8876-e2c650ad8d77,Namespace:calico-system,Attempt:0,}"
Jan 16 21:21:46.507790 kubelet[2829]: E0116 21:21:46.507396 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:46.517808 containerd[1596]: time="2026-01-16T21:21:46.516653054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,}"
Jan 16 21:21:46.517808 containerd[1596]: time="2026-01-16T21:21:46.516777344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-x2ltk,Uid:cf888ed5-265d-4b90-8b8f-76579a07e031,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 21:21:46.880410 containerd[1596]: time="2026-01-16T21:21:46.880284811Z" level=error msg="Failed to destroy network for sandbox \"61d42bcab499239bc0fd50e32219f05fd62831f56283c893beeb5c1b2d4a70c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.888695 containerd[1596]: time="2026-01-16T21:21:46.888638329Z" level=error msg="Failed to destroy network for sandbox \"a676bc9fd6c09876a1553d918ed2cd8cc05bd50fd7bc1df0e1572776b0afdb95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.890897 systemd[1]: run-netns-cni\x2d3506af8a\x2d27c0\x2d4f24\x2ddfc8\x2dbad47c30f6f5.mount: Deactivated successfully.
Jan 16 21:21:46.910972 containerd[1596]: time="2026-01-16T21:21:46.909862600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a676bc9fd6c09876a1553d918ed2cd8cc05bd50fd7bc1df0e1572776b0afdb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.911685 kubelet[2829]: E0116 21:21:46.910685 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a676bc9fd6c09876a1553d918ed2cd8cc05bd50fd7bc1df0e1572776b0afdb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.911685 kubelet[2829]: E0116 21:21:46.910823 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a676bc9fd6c09876a1553d918ed2cd8cc05bd50fd7bc1df0e1572776b0afdb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh"
Jan 16 21:21:46.911685 kubelet[2829]: E0116 21:21:46.910855 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a676bc9fd6c09876a1553d918ed2cd8cc05bd50fd7bc1df0e1572776b0afdb95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh"
Jan 16 21:21:46.913431 kubelet[2829]: E0116 21:21:46.910913 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a676bc9fd6c09876a1553d918ed2cd8cc05bd50fd7bc1df0e1572776b0afdb95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247"
Jan 16 21:21:46.913677 containerd[1596]: time="2026-01-16T21:21:46.912356390Z" level=error msg="Failed to destroy network for sandbox \"08784e57899abe6100761f48c609e267562e7a7e46698b72a232ed8786991c8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.913912 containerd[1596]: time="2026-01-16T21:21:46.913794618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-6gdmk,Uid:484b15e8-2e9e-4270-8a9c-899b52ca1f08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d42bcab499239bc0fd50e32219f05fd62831f56283c893beeb5c1b2d4a70c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.914521 kubelet[2829]: E0116 21:21:46.914318 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d42bcab499239bc0fd50e32219f05fd62831f56283c893beeb5c1b2d4a70c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.914521 kubelet[2829]: E0116 21:21:46.914380 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d42bcab499239bc0fd50e32219f05fd62831f56283c893beeb5c1b2d4a70c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk"
Jan 16 21:21:46.914521 kubelet[2829]: E0116 21:21:46.914406 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d42bcab499239bc0fd50e32219f05fd62831f56283c893beeb5c1b2d4a70c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk"
Jan 16 21:21:46.914751 kubelet[2829]: E0116 21:21:46.914467 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61d42bcab499239bc0fd50e32219f05fd62831f56283c893beeb5c1b2d4a70c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08"
Jan 16 21:21:46.916906 containerd[1596]: time="2026-01-16T21:21:46.916776284Z" level=error msg="Failed to destroy network for sandbox \"34d8428b8cc2bafcef8af6493defc695a4f2bb546dc39f6ce2e0195da0023448\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.936936 containerd[1596]: time="2026-01-16T21:21:46.936662299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65b458f79c-klwcj,Uid:1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08784e57899abe6100761f48c609e267562e7a7e46698b72a232ed8786991c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.940682 kubelet[2829]: E0116 21:21:46.940456 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08784e57899abe6100761f48c609e267562e7a7e46698b72a232ed8786991c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.940682 kubelet[2829]: E0116 21:21:46.940529 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08784e57899abe6100761f48c609e267562e7a7e46698b72a232ed8786991c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65b458f79c-klwcj"
Jan 16 21:21:46.940682 kubelet[2829]: E0116 21:21:46.940562 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08784e57899abe6100761f48c609e267562e7a7e46698b72a232ed8786991c8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65b458f79c-klwcj"
Jan 16 21:21:46.941877 kubelet[2829]: E0116 21:21:46.941813 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65b458f79c-klwcj_calico-system(1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65b458f79c-klwcj_calico-system(1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08784e57899abe6100761f48c609e267562e7a7e46698b72a232ed8786991c8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65b458f79c-klwcj" podUID="1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d"
Jan 16 21:21:46.971530 containerd[1596]: time="2026-01-16T21:21:46.969309429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6vb67,Uid:27a58ce5-0b24-4017-b5c5-f30f4c025ef8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d8428b8cc2bafcef8af6493defc695a4f2bb546dc39f6ce2e0195da0023448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.971908 kubelet[2829]: E0116 21:21:46.971307 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d8428b8cc2bafcef8af6493defc695a4f2bb546dc39f6ce2e0195da0023448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:46.971908 kubelet[2829]: E0116 21:21:46.971386 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d8428b8cc2bafcef8af6493defc695a4f2bb546dc39f6ce2e0195da0023448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6vb67"
Jan 16 21:21:46.971908 kubelet[2829]: E0116 21:21:46.971415 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34d8428b8cc2bafcef8af6493defc695a4f2bb546dc39f6ce2e0195da0023448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6vb67"
Jan 16 21:21:46.972049 kubelet[2829]: E0116 21:21:46.971464 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6vb67_kube-system(27a58ce5-0b24-4017-b5c5-f30f4c025ef8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6vb67_kube-system(27a58ce5-0b24-4017-b5c5-f30f4c025ef8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34d8428b8cc2bafcef8af6493defc695a4f2bb546dc39f6ce2e0195da0023448\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6vb67" podUID="27a58ce5-0b24-4017-b5c5-f30f4c025ef8"
Jan 16 21:21:47.000502 containerd[1596]: time="2026-01-16T21:21:46.998974672Z" level=error msg="Failed to destroy network for sandbox \"a7abbd921caad0d7ebec5e2ade873e6d6876031d1cfcc60393b52baaf3419a32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.015851 containerd[1596]: time="2026-01-16T21:21:47.015797688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j7hqz,Uid:044f9539-8858-49e2-8876-e2c650ad8d77,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7abbd921caad0d7ebec5e2ade873e6d6876031d1cfcc60393b52baaf3419a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.016756 kubelet[2829]: E0116 21:21:47.016714 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7abbd921caad0d7ebec5e2ade873e6d6876031d1cfcc60393b52baaf3419a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.017241 kubelet[2829]: E0116 21:21:47.016985 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7abbd921caad0d7ebec5e2ade873e6d6876031d1cfcc60393b52baaf3419a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j7hqz"
Jan 16 21:21:47.017241 kubelet[2829]: E0116 21:21:47.017025 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7abbd921caad0d7ebec5e2ade873e6d6876031d1cfcc60393b52baaf3419a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j7hqz"
Jan 16 21:21:47.017241 kubelet[2829]: E0116 21:21:47.017072 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7abbd921caad0d7ebec5e2ade873e6d6876031d1cfcc60393b52baaf3419a32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77"
Jan 16 21:21:47.018047 containerd[1596]: time="2026-01-16T21:21:47.017958541Z" level=error msg="Failed to destroy network for sandbox \"e04a05500e878bb2ac7d4a0bc626741ae8a36e080061141653c8254f999b7325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.027786 containerd[1596]: time="2026-01-16T21:21:47.027510576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04a05500e878bb2ac7d4a0bc626741ae8a36e080061141653c8254f999b7325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.027786 containerd[1596]: time="2026-01-16T21:21:47.027510766Z" level=error msg="Failed to destroy network for sandbox \"3437a782ce59d6274a76cab0678b9a5c54a1900a4d6b16ddde86d1c6729fe4ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.028070 kubelet[2829]: E0116 21:21:47.027936 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04a05500e878bb2ac7d4a0bc626741ae8a36e080061141653c8254f999b7325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.028070 kubelet[2829]: E0116 21:21:47.028017 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04a05500e878bb2ac7d4a0bc626741ae8a36e080061141653c8254f999b7325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:21:47.028070 kubelet[2829]: E0116 21:21:47.028051 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04a05500e878bb2ac7d4a0bc626741ae8a36e080061141653c8254f999b7325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:21:47.028337 kubelet[2829]: E0116 21:21:47.028214 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tzvp2_kube-system(35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tzvp2_kube-system(35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e04a05500e878bb2ac7d4a0bc626741ae8a36e080061141653c8254f999b7325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tzvp2" podUID="35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890"
Jan 16 21:21:47.046285 containerd[1596]: time="2026-01-16T21:21:47.045932418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-x2ltk,Uid:cf888ed5-265d-4b90-8b8f-76579a07e031,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3437a782ce59d6274a76cab0678b9a5c54a1900a4d6b16ddde86d1c6729fe4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.046783 kubelet[2829]: E0116 21:21:47.046434 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3437a782ce59d6274a76cab0678b9a5c54a1900a4d6b16ddde86d1c6729fe4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.046783 kubelet[2829]: E0116 21:21:47.046497 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3437a782ce59d6274a76cab0678b9a5c54a1900a4d6b16ddde86d1c6729fe4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk"
Jan 16 21:21:47.046783 kubelet[2829]: E0116 21:21:47.046525 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3437a782ce59d6274a76cab0678b9a5c54a1900a4d6b16ddde86d1c6729fe4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk"
Jan 16 21:21:47.046941 kubelet[2829]: E0116 21:21:47.046571 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3437a782ce59d6274a76cab0678b9a5c54a1900a4d6b16ddde86d1c6729fe4ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031"
Jan 16 21:21:47.464308 systemd[1]: Created slice kubepods-besteffort-pod8c8c0e82_b18e_4cf2_bc74_ab0296b892f6.slice - libcontainer container kubepods-besteffort-pod8c8c0e82_b18e_4cf2_bc74_ab0296b892f6.slice.
Jan 16 21:21:47.475256 containerd[1596]: time="2026-01-16T21:21:47.474391326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hncm,Uid:8c8c0e82-b18e-4cf2-bc74-ab0296b892f6,Namespace:calico-system,Attempt:0,}"
Jan 16 21:21:47.531411 systemd[1]: run-netns-cni\x2d33a371dd\x2dd3c1\x2dd629\x2dce50\x2d6561646ee29e.mount: Deactivated successfully.
Jan 16 21:21:47.531831 systemd[1]: run-netns-cni\x2d5bb93402\x2dbf56\x2d47a3\x2d8e78\x2df74e49e12abc.mount: Deactivated successfully.
Jan 16 21:21:47.532040 systemd[1]: run-netns-cni\x2dac3a7dfc\x2dc6d0\x2d534f\x2dcbc4\x2d9c7c9aec00a5.mount: Deactivated successfully.
Jan 16 21:21:47.532352 systemd[1]: run-netns-cni\x2df96b4459\x2daa6d\x2deb8d\x2d2bca\x2d4ad65783ff31.mount: Deactivated successfully.
Jan 16 21:21:47.532547 systemd[1]: run-netns-cni\x2dee07344e\x2d9e95\x2dbb7a\x2d9975\x2d752161cb3ed0.mount: Deactivated successfully.
Jan 16 21:21:47.532824 systemd[1]: run-netns-cni\x2dd6377a1c\x2d4949\x2d26b6\x2ded72\x2deed587a0eca4.mount: Deactivated successfully.
Jan 16 21:21:47.700951 containerd[1596]: time="2026-01-16T21:21:47.698829729Z" level=error msg="Failed to destroy network for sandbox \"a390038274f2e001744a5ed6fa7de963b97bae56e949788a6ffaad3cd0e9adc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.703570 systemd[1]: run-netns-cni\x2d53d23f21\x2ded08\x2debf4\x2d678c\x2daa0593c7e923.mount: Deactivated successfully.
Jan 16 21:21:47.719229 containerd[1596]: time="2026-01-16T21:21:47.718556488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hncm,Uid:8c8c0e82-b18e-4cf2-bc74-ab0296b892f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a390038274f2e001744a5ed6fa7de963b97bae56e949788a6ffaad3cd0e9adc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.720777 kubelet[2829]: E0116 21:21:47.720260 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a390038274f2e001744a5ed6fa7de963b97bae56e949788a6ffaad3cd0e9adc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:47.720777 kubelet[2829]: E0116 21:21:47.720331 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a390038274f2e001744a5ed6fa7de963b97bae56e949788a6ffaad3cd0e9adc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hncm"
Jan 16 21:21:47.720777 kubelet[2829]: E0116 21:21:47.720362 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a390038274f2e001744a5ed6fa7de963b97bae56e949788a6ffaad3cd0e9adc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hncm"
Jan 16 21:21:47.720944 kubelet[2829]: E0116 21:21:47.720409 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a390038274f2e001744a5ed6fa7de963b97bae56e949788a6ffaad3cd0e9adc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6"
Jan 16 21:21:57.504293 kubelet[2829]: E0116 21:21:57.502791 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:21:57.509832 containerd[1596]: time="2026-01-16T21:21:57.509630573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,}"
Jan 16 21:21:57.810301 containerd[1596]: time="2026-01-16T21:21:57.809284920Z" level=error msg="Failed to destroy network for sandbox \"a131397f67e3bcd1c4ec81fef199006aff6a20329efc338466ff352b9c2138db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:57.816790 systemd[1]: run-netns-cni\x2d332e4377\x2d42fb\x2d41bb\x2dc9d7\x2d9e4419d645cf.mount: Deactivated successfully.
Jan 16 21:21:57.821917 kubelet[2829]: E0116 21:21:57.819947 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a131397f67e3bcd1c4ec81fef199006aff6a20329efc338466ff352b9c2138db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:21:57.821917 kubelet[2829]: E0116 21:21:57.820020 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a131397f67e3bcd1c4ec81fef199006aff6a20329efc338466ff352b9c2138db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:21:57.821917 kubelet[2829]: E0116 21:21:57.820048 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a131397f67e3bcd1c4ec81fef199006aff6a20329efc338466ff352b9c2138db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:21:57.822213 containerd[1596]: time="2026-01-16T21:21:57.819247782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a131397f67e3bcd1c4ec81fef199006aff6a20329efc338466ff352b9c2138db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted
/var/lib/calico/" Jan 16 21:21:57.822407 kubelet[2829]: E0116 21:21:57.820222 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tzvp2_kube-system(35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tzvp2_kube-system(35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a131397f67e3bcd1c4ec81fef199006aff6a20329efc338466ff352b9c2138db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tzvp2" podUID="35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890" Jan 16 21:21:59.447797 containerd[1596]: time="2026-01-16T21:21:59.447687679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65b458f79c-klwcj,Uid:1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:59.473507 containerd[1596]: time="2026-01-16T21:21:59.473452598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j7hqz,Uid:044f9539-8858-49e2-8876-e2c650ad8d77,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:59.473760 containerd[1596]: time="2026-01-16T21:21:59.473653521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hncm,Uid:8c8c0e82-b18e-4cf2-bc74-ab0296b892f6,Namespace:calico-system,Attempt:0,}" Jan 16 21:21:59.482420 kubelet[2829]: E0116 21:21:59.478859 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:21:59.483646 containerd[1596]: time="2026-01-16T21:21:59.483510202Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-6gdmk,Uid:484b15e8-2e9e-4270-8a9c-899b52ca1f08,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:21:59.483718 containerd[1596]: time="2026-01-16T21:21:59.483697880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6vb67,Uid:27a58ce5-0b24-4017-b5c5-f30f4c025ef8,Namespace:kube-system,Attempt:0,}" Jan 16 21:22:00.141283 containerd[1596]: time="2026-01-16T21:22:00.141031147Z" level=error msg="Failed to destroy network for sandbox \"812b82cd34c68cc3e7af91278caf7a234249d9739cd49df7934ca1fffd9f5f44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.149236 systemd[1]: run-netns-cni\x2d66b8dce4\x2d6133\x2dc9b2\x2d4662\x2d7245f6356c7a.mount: Deactivated successfully. Jan 16 21:22:00.244965 containerd[1596]: time="2026-01-16T21:22:00.244315066Z" level=error msg="Failed to destroy network for sandbox \"e3830ceed3d077ea3eaafd8088ae4999f37fb3f90f35c65d2db3d8e5354fa4eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.261285 containerd[1596]: time="2026-01-16T21:22:00.261231224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6vb67,Uid:27a58ce5-0b24-4017-b5c5-f30f4c025ef8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"812b82cd34c68cc3e7af91278caf7a234249d9739cd49df7934ca1fffd9f5f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.267449 kubelet[2829]: E0116 21:22:00.267400 2829 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812b82cd34c68cc3e7af91278caf7a234249d9739cd49df7934ca1fffd9f5f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.268366 kubelet[2829]: E0116 21:22:00.267744 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812b82cd34c68cc3e7af91278caf7a234249d9739cd49df7934ca1fffd9f5f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6vb67" Jan 16 21:22:00.268366 kubelet[2829]: E0116 21:22:00.267783 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812b82cd34c68cc3e7af91278caf7a234249d9739cd49df7934ca1fffd9f5f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6vb67" Jan 16 21:22:00.268366 kubelet[2829]: E0116 21:22:00.267840 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6vb67_kube-system(27a58ce5-0b24-4017-b5c5-f30f4c025ef8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6vb67_kube-system(27a58ce5-0b24-4017-b5c5-f30f4c025ef8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"812b82cd34c68cc3e7af91278caf7a234249d9739cd49df7934ca1fffd9f5f44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6vb67" podUID="27a58ce5-0b24-4017-b5c5-f30f4c025ef8" Jan 16 21:22:00.308776 containerd[1596]: time="2026-01-16T21:22:00.305268333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-6gdmk,Uid:484b15e8-2e9e-4270-8a9c-899b52ca1f08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3830ceed3d077ea3eaafd8088ae4999f37fb3f90f35c65d2db3d8e5354fa4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.313203 kubelet[2829]: E0116 21:22:00.309683 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3830ceed3d077ea3eaafd8088ae4999f37fb3f90f35c65d2db3d8e5354fa4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.313203 kubelet[2829]: E0116 21:22:00.309755 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3830ceed3d077ea3eaafd8088ae4999f37fb3f90f35c65d2db3d8e5354fa4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" Jan 16 21:22:00.313203 kubelet[2829]: E0116 21:22:00.309785 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3830ceed3d077ea3eaafd8088ae4999f37fb3f90f35c65d2db3d8e5354fa4eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" Jan 16 21:22:00.313402 kubelet[2829]: E0116 21:22:00.309830 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3830ceed3d077ea3eaafd8088ae4999f37fb3f90f35c65d2db3d8e5354fa4eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:22:00.390301 containerd[1596]: time="2026-01-16T21:22:00.389680187Z" level=error msg="Failed to destroy network for sandbox \"47d5076af852a23cc556f82a39700e6dd7cdd85b4bc123e35b17f5f2bfd6e250\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.404402 containerd[1596]: time="2026-01-16T21:22:00.399351934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hncm,Uid:8c8c0e82-b18e-4cf2-bc74-ab0296b892f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d5076af852a23cc556f82a39700e6dd7cdd85b4bc123e35b17f5f2bfd6e250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.405648 kubelet[2829]: 
E0116 21:22:00.405604 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d5076af852a23cc556f82a39700e6dd7cdd85b4bc123e35b17f5f2bfd6e250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.405739 kubelet[2829]: E0116 21:22:00.405673 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d5076af852a23cc556f82a39700e6dd7cdd85b4bc123e35b17f5f2bfd6e250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hncm" Jan 16 21:22:00.405739 kubelet[2829]: E0116 21:22:00.405708 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d5076af852a23cc556f82a39700e6dd7cdd85b4bc123e35b17f5f2bfd6e250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4hncm" Jan 16 21:22:00.405834 kubelet[2829]: E0116 21:22:00.405752 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47d5076af852a23cc556f82a39700e6dd7cdd85b4bc123e35b17f5f2bfd6e250\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:22:00.417795 containerd[1596]: time="2026-01-16T21:22:00.417746961Z" level=error msg="Failed to destroy network for sandbox \"3c43800fd98332e38b08a48e2d73a4f39e0d17bb689cf1ada85f9166422c7f01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.428463 containerd[1596]: time="2026-01-16T21:22:00.427851853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j7hqz,Uid:044f9539-8858-49e2-8876-e2c650ad8d77,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43800fd98332e38b08a48e2d73a4f39e0d17bb689cf1ada85f9166422c7f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.431290 kubelet[2829]: E0116 21:22:00.430895 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43800fd98332e38b08a48e2d73a4f39e0d17bb689cf1ada85f9166422c7f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.431290 kubelet[2829]: E0116 21:22:00.430954 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43800fd98332e38b08a48e2d73a4f39e0d17bb689cf1ada85f9166422c7f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j7hqz" Jan 16 21:22:00.431290 kubelet[2829]: E0116 21:22:00.430973 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c43800fd98332e38b08a48e2d73a4f39e0d17bb689cf1ada85f9166422c7f01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j7hqz" Jan 16 21:22:00.441464 kubelet[2829]: E0116 21:22:00.431005 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c43800fd98332e38b08a48e2d73a4f39e0d17bb689cf1ada85f9166422c7f01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:22:00.483241 containerd[1596]: time="2026-01-16T21:22:00.480646238Z" level=error msg="Failed to destroy network for sandbox \"62df5f4afcd9527721dc1550ae2667f0008e2196b8f2a019b4f0ab986253e09c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.498053 containerd[1596]: time="2026-01-16T21:22:00.497929180Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-65b458f79c-klwcj,Uid:1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62df5f4afcd9527721dc1550ae2667f0008e2196b8f2a019b4f0ab986253e09c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.499273 kubelet[2829]: E0116 21:22:00.499023 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62df5f4afcd9527721dc1550ae2667f0008e2196b8f2a019b4f0ab986253e09c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:00.499865 kubelet[2829]: E0116 21:22:00.499282 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62df5f4afcd9527721dc1550ae2667f0008e2196b8f2a019b4f0ab986253e09c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65b458f79c-klwcj" Jan 16 21:22:00.499865 kubelet[2829]: E0116 21:22:00.499312 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62df5f4afcd9527721dc1550ae2667f0008e2196b8f2a019b4f0ab986253e09c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65b458f79c-klwcj" Jan 16 21:22:00.499865 kubelet[2829]: E0116 21:22:00.499364 2829 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65b458f79c-klwcj_calico-system(1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65b458f79c-klwcj_calico-system(1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62df5f4afcd9527721dc1550ae2667f0008e2196b8f2a019b4f0ab986253e09c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65b458f79c-klwcj" podUID="1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d" Jan 16 21:22:00.553259 systemd[1]: run-netns-cni\x2de411f6f0\x2d7cf5\x2da0f0\x2dc620\x2d275f1b3bd18d.mount: Deactivated successfully. Jan 16 21:22:00.553448 systemd[1]: run-netns-cni\x2d36ab7bf1\x2dd8d0\x2d4c87\x2d92fc\x2d90d8c53902ae.mount: Deactivated successfully. Jan 16 21:22:00.554008 systemd[1]: run-netns-cni\x2df9e30d91\x2d68c2\x2da845\x2d38a9\x2d245be75fd734.mount: Deactivated successfully. Jan 16 21:22:00.554207 systemd[1]: run-netns-cni\x2d42e1da45\x2d19ba\x2d6fc8\x2dba35\x2d9bff754d8b89.mount: Deactivated successfully. 
Jan 16 21:22:01.455950 containerd[1596]: time="2026-01-16T21:22:01.455742206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,}" Jan 16 21:22:01.466680 containerd[1596]: time="2026-01-16T21:22:01.466511743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-x2ltk,Uid:cf888ed5-265d-4b90-8b8f-76579a07e031,Namespace:calico-apiserver,Attempt:0,}" Jan 16 21:22:01.843974 containerd[1596]: time="2026-01-16T21:22:01.843919019Z" level=error msg="Failed to destroy network for sandbox \"010a2216e1dc3d187ed66ff51da1de17c51a93c1ec640c8ef5a840da0a6cae4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:01.855297 containerd[1596]: time="2026-01-16T21:22:01.855249953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"010a2216e1dc3d187ed66ff51da1de17c51a93c1ec640c8ef5a840da0a6cae4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:01.858288 systemd[1]: run-netns-cni\x2d0542315e\x2d8972\x2d45d7\x2d61e5\x2d2e78b19e031d.mount: Deactivated successfully. 
Jan 16 21:22:01.864856 kubelet[2829]: E0116 21:22:01.858333 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010a2216e1dc3d187ed66ff51da1de17c51a93c1ec640c8ef5a840da0a6cae4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:01.864856 kubelet[2829]: E0116 21:22:01.858415 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010a2216e1dc3d187ed66ff51da1de17c51a93c1ec640c8ef5a840da0a6cae4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" Jan 16 21:22:01.864856 kubelet[2829]: E0116 21:22:01.858449 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010a2216e1dc3d187ed66ff51da1de17c51a93c1ec640c8ef5a840da0a6cae4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" Jan 16 21:22:01.871210 kubelet[2829]: E0116 21:22:01.858501 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"010a2216e1dc3d187ed66ff51da1de17c51a93c1ec640c8ef5a840da0a6cae4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:22:01.906755 containerd[1596]: time="2026-01-16T21:22:01.903029035Z" level=error msg="Failed to destroy network for sandbox \"e0e20a4fd8d94810a8a0440484b8d48d3b18a8e1a5612878830d68139280f04c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:01.908875 systemd[1]: run-netns-cni\x2d9902d565\x2d1b26\x2dd32b\x2d80c7\x2de3f953dde577.mount: Deactivated successfully. Jan 16 21:22:01.926274 containerd[1596]: time="2026-01-16T21:22:01.919733664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-x2ltk,Uid:cf888ed5-265d-4b90-8b8f-76579a07e031,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e20a4fd8d94810a8a0440484b8d48d3b18a8e1a5612878830d68139280f04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:01.926482 kubelet[2829]: E0116 21:22:01.920372 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e20a4fd8d94810a8a0440484b8d48d3b18a8e1a5612878830d68139280f04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 21:22:01.926482 kubelet[2829]: E0116 21:22:01.920440 2829 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e20a4fd8d94810a8a0440484b8d48d3b18a8e1a5612878830d68139280f04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" Jan 16 21:22:01.926482 kubelet[2829]: E0116 21:22:01.920468 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e20a4fd8d94810a8a0440484b8d48d3b18a8e1a5612878830d68139280f04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" Jan 16 21:22:01.926699 kubelet[2829]: E0116 21:22:01.920517 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0e20a4fd8d94810a8a0440484b8d48d3b18a8e1a5612878830d68139280f04c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:22:10.445308 kubelet[2829]: E0116 21:22:10.444657 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:22:10.447995 containerd[1596]: time="2026-01-16T21:22:10.445450587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,}"
Jan 16 21:22:10.720971 containerd[1596]: time="2026-01-16T21:22:10.716866726Z" level=error msg="Failed to destroy network for sandbox \"f46403add2ae11339a15150c9267d29bee1f51d5da3e11b70cd2832ad61a703b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:22:10.726604 systemd[1]: run-netns-cni\x2dbb3cca7d\x2ddaeb\x2d838e\x2df21b\x2d21ca386df917.mount: Deactivated successfully.
Jan 16 21:22:10.739317 containerd[1596]: time="2026-01-16T21:22:10.737451060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46403add2ae11339a15150c9267d29bee1f51d5da3e11b70cd2832ad61a703b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:22:10.740981 kubelet[2829]: E0116 21:22:10.740258 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46403add2ae11339a15150c9267d29bee1f51d5da3e11b70cd2832ad61a703b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:22:10.740981 kubelet[2829]: E0116 21:22:10.740331 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46403add2ae11339a15150c9267d29bee1f51d5da3e11b70cd2832ad61a703b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:22:10.740981 kubelet[2829]: E0116 21:22:10.740358 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46403add2ae11339a15150c9267d29bee1f51d5da3e11b70cd2832ad61a703b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tzvp2"
Jan 16 21:22:10.741363 kubelet[2829]: E0116 21:22:10.740410 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tzvp2_kube-system(35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tzvp2_kube-system(35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f46403add2ae11339a15150c9267d29bee1f51d5da3e11b70cd2832ad61a703b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tzvp2" podUID="35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890"
Jan 16 21:22:11.127023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334711883.mount: Deactivated successfully.
Jan 16 21:22:11.324352 containerd[1596]: time="2026-01-16T21:22:11.323944153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 21:22:11.327902 containerd[1596]: time="2026-01-16T21:22:11.327859546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025"
Jan 16 21:22:11.336594 containerd[1596]: time="2026-01-16T21:22:11.334490309Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 21:22:11.340266 containerd[1596]: time="2026-01-16T21:22:11.338938188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 21:22:11.358422 containerd[1596]: time="2026-01-16T21:22:11.356329368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 25.355324216s"
Jan 16 21:22:11.358422 containerd[1596]: time="2026-01-16T21:22:11.356679951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Jan 16 21:22:11.398253 containerd[1596]: time="2026-01-16T21:22:11.397360143Z" level=info msg="CreateContainer within sandbox \"5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 16 21:22:11.493511 containerd[1596]: time="2026-01-16T21:22:11.492635525Z" level=info msg="Container 6c364167ea4f52b9898415afd9256b7ee9a9ef2dda2c1a7f5e9adff22d29debb: CDI devices from CRI Config.CDIDevices: []"
Jan 16 21:22:11.524157 containerd[1596]: time="2026-01-16T21:22:11.524037825Z" level=info msg="CreateContainer within sandbox \"5027fe0125a2967537a59df04e1ea95b11b1ad03781d431113cc5c0bcd814471\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6c364167ea4f52b9898415afd9256b7ee9a9ef2dda2c1a7f5e9adff22d29debb\""
Jan 16 21:22:11.529769 containerd[1596]: time="2026-01-16T21:22:11.529687906Z" level=info msg="StartContainer for \"6c364167ea4f52b9898415afd9256b7ee9a9ef2dda2c1a7f5e9adff22d29debb\""
Jan 16 21:22:11.536855 containerd[1596]: time="2026-01-16T21:22:11.533663875Z" level=info msg="connecting to shim 6c364167ea4f52b9898415afd9256b7ee9a9ef2dda2c1a7f5e9adff22d29debb" address="unix:///run/containerd/s/8449c2d095ca966ebca69811b67ef23e438bb3cbc18448dc995cbd9e824afad7" protocol=ttrpc version=3
Jan 16 21:22:11.692949 systemd[1]: Started cri-containerd-6c364167ea4f52b9898415afd9256b7ee9a9ef2dda2c1a7f5e9adff22d29debb.scope - libcontainer container 6c364167ea4f52b9898415afd9256b7ee9a9ef2dda2c1a7f5e9adff22d29debb.
Jan 16 21:22:11.879000 audit: BPF prog-id=172 op=LOAD
Jan 16 21:22:11.892857 kernel: kauditd_printk_skb: 6 callbacks suppressed
Jan 16 21:22:11.892957 kernel: audit: type=1334 audit(1768598531.879:572): prog-id=172 op=LOAD
Jan 16 21:22:11.909777 kernel: audit: type=1300 audit(1768598531.879:572): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:11.879000 audit[4128]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:11.955285 kernel: audit: type=1327 audit(1768598531.879:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:11.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:11.994290 kernel: audit: type=1334 audit(1768598531.881:573): prog-id=173 op=LOAD
Jan 16 21:22:11.881000 audit: BPF prog-id=173 op=LOAD
Jan 16 21:22:11.881000 audit[4128]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:12.025852 kernel: audit: type=1300 audit(1768598531.881:573): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:11.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:12.054976 kernel: audit: type=1327 audit(1768598531.881:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:12.055326 kernel: audit: type=1334 audit(1768598531.881:574): prog-id=173 op=UNLOAD
Jan 16 21:22:11.881000 audit: BPF prog-id=173 op=UNLOAD
Jan 16 21:22:12.076456 kernel: audit: type=1300 audit(1768598531.881:574): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:11.881000 audit[4128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:11.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:12.123878 kernel: audit: type=1327 audit(1768598531.881:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:12.124028 kernel: audit: type=1334 audit(1768598531.881:575): prog-id=172 op=UNLOAD
Jan 16 21:22:11.881000 audit: BPF prog-id=172 op=UNLOAD
Jan 16 21:22:11.881000 audit[4128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:11.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:11.881000 audit: BPF prog-id=174 op=LOAD
Jan 16 21:22:11.881000 audit[4128]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3357 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 21:22:11.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663333634313637656134663532623938393834313561666439323536
Jan 16 21:22:12.188673 containerd[1596]: time="2026-01-16T21:22:12.188477132Z" level=info msg="StartContainer for \"6c364167ea4f52b9898415afd9256b7ee9a9ef2dda2c1a7f5e9adff22d29debb\" returns successfully"
Jan 16 21:22:12.398017 kubelet[2829]: E0116 21:22:12.397255 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:22:12.453788 containerd[1596]: time="2026-01-16T21:22:12.451718066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,}"
Jan 16 21:22:12.561647 kubelet[2829]: I0116 21:22:12.560485 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-58bkq" podStartSLOduration=2.319085511 podStartE2EDuration="39.560463591s" podCreationTimestamp="2026-01-16 21:21:33 +0000 UTC" firstStartedPulling="2026-01-16 21:21:34.121306647 +0000 UTC m=+24.998337652" lastFinishedPulling="2026-01-16 21:22:11.362684726 +0000 UTC m=+62.239715732" observedRunningTime="2026-01-16 21:22:12.541417843 +0000 UTC m=+63.418448849" watchObservedRunningTime="2026-01-16 21:22:12.560463591 +0000 UTC m=+63.437494597"
Jan 16 21:22:12.828919 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 16 21:22:12.829020 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 16 21:22:12.834875 containerd[1596]: time="2026-01-16T21:22:12.834248167Z" level=error msg="Failed to destroy network for sandbox \"0d522aa5a79bbef7f984f2dc5ca62e25136f7f4d1552d9836ca19afc118937bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:22:12.851859 systemd[1]: run-netns-cni\x2dc3ff94d5\x2dac85\x2d2ab7\x2deca2\x2de27859d2fff7.mount: Deactivated successfully.
Jan 16 21:22:12.917384 containerd[1596]: time="2026-01-16T21:22:12.916035082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d522aa5a79bbef7f984f2dc5ca62e25136f7f4d1552d9836ca19afc118937bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:22:12.917987 kubelet[2829]: E0116 21:22:12.916917 2829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d522aa5a79bbef7f984f2dc5ca62e25136f7f4d1552d9836ca19afc118937bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 21:22:12.917987 kubelet[2829]: E0116 21:22:12.916978 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d522aa5a79bbef7f984f2dc5ca62e25136f7f4d1552d9836ca19afc118937bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh"
Jan 16 21:22:12.917987 kubelet[2829]: E0116 21:22:12.917011 2829 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d522aa5a79bbef7f984f2dc5ca62e25136f7f4d1552d9836ca19afc118937bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh"
Jan 16 21:22:12.918284 kubelet[2829]: E0116 21:22:12.917062 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d522aa5a79bbef7f984f2dc5ca62e25136f7f4d1552d9836ca19afc118937bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247"
Jan 16 21:22:13.400972 kubelet[2829]: E0116 21:22:13.400505 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:22:13.446341 containerd[1596]: time="2026-01-16T21:22:13.444475119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-6gdmk,Uid:484b15e8-2e9e-4270-8a9c-899b52ca1f08,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 21:22:13.450745 containerd[1596]: time="2026-01-16T21:22:13.450368408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j7hqz,Uid:044f9539-8858-49e2-8876-e2c650ad8d77,Namespace:calico-system,Attempt:0,}"
Jan 16 21:22:13.450745 containerd[1596]: time="2026-01-16T21:22:13.450477401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-x2ltk,Uid:cf888ed5-265d-4b90-8b8f-76579a07e031,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 21:22:13.836582 kubelet[2829]: I0116 21:22:13.834515 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-backend-key-pair\") pod \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\" (UID: \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\") "
Jan 16 21:22:13.836582 kubelet[2829]: I0116 21:22:13.834705 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qlhd\" (UniqueName: \"kubernetes.io/projected/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-kube-api-access-8qlhd\") pod \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\" (UID: \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\") "
Jan 16 21:22:13.836582 kubelet[2829]: I0116 21:22:13.834733 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-ca-bundle\") pod \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\" (UID: \"1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d\") "
Jan 16 21:22:13.840464 kubelet[2829]: I0116 21:22:13.839725 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d" (UID: "1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 16 21:22:13.876485 kubelet[2829]: I0116 21:22:13.874907 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-kube-api-access-8qlhd" (OuterVolumeSpecName: "kube-api-access-8qlhd") pod "1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d" (UID: "1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d"). InnerVolumeSpecName "kube-api-access-8qlhd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 16 21:22:13.875425 systemd[1]: var-lib-kubelet-pods-1bb2c8da\x2d8b40\x2d42bd\x2db0b7\x2dc9e61aa8909d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8qlhd.mount: Deactivated successfully.
Jan 16 21:22:13.880842 systemd[1]: var-lib-kubelet-pods-1bb2c8da\x2d8b40\x2d42bd\x2db0b7\x2dc9e61aa8909d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 16 21:22:13.892055 kubelet[2829]: I0116 21:22:13.890650 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d" (UID: "1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 16 21:22:13.936792 kubelet[2829]: I0116 21:22:13.936443 2829 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Jan 16 21:22:13.936792 kubelet[2829]: I0116 21:22:13.936600 2829 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8qlhd\" (UniqueName: \"kubernetes.io/projected/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-kube-api-access-8qlhd\") on node \"localhost\" DevicePath \"\""
Jan 16 21:22:13.936792 kubelet[2829]: I0116 21:22:13.936617 2829 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jan 16 21:22:14.442246 systemd[1]: Removed slice kubepods-besteffort-pod1bb2c8da_8b40_42bd_b0b7_c9e61aa8909d.slice - libcontainer container kubepods-besteffort-pod1bb2c8da_8b40_42bd_b0b7_c9e61aa8909d.slice.
Jan 16 21:22:14.447725 kubelet[2829]: E0116 21:22:14.447631 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 16 21:22:14.448859 containerd[1596]: time="2026-01-16T21:22:14.448432464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6vb67,Uid:27a58ce5-0b24-4017-b5c5-f30f4c025ef8,Namespace:kube-system,Attempt:0,}"
Jan 16 21:22:14.464866 containerd[1596]: time="2026-01-16T21:22:14.453516008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hncm,Uid:8c8c0e82-b18e-4cf2-bc74-ab0296b892f6,Namespace:calico-system,Attempt:0,}"
Jan 16 21:22:14.963869 systemd[1]: Created slice kubepods-besteffort-pod1ffcbae4_3231_47a7_b3a3_9a78e5206e0e.slice - libcontainer container kubepods-besteffort-pod1ffcbae4_3231_47a7_b3a3_9a78e5206e0e.slice.
Jan 16 21:22:15.021210 kubelet[2829]: I0116 21:22:15.020921 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ffcbae4-3231-47a7-b3a3-9a78e5206e0e-whisker-ca-bundle\") pod \"whisker-65bbd7c669-5jcq4\" (UID: \"1ffcbae4-3231-47a7-b3a3-9a78e5206e0e\") " pod="calico-system/whisker-65bbd7c669-5jcq4"
Jan 16 21:22:15.021210 kubelet[2829]: I0116 21:22:15.021061 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1ffcbae4-3231-47a7-b3a3-9a78e5206e0e-whisker-backend-key-pair\") pod \"whisker-65bbd7c669-5jcq4\" (UID: \"1ffcbae4-3231-47a7-b3a3-9a78e5206e0e\") " pod="calico-system/whisker-65bbd7c669-5jcq4"
Jan 16 21:22:15.021416 kubelet[2829]: I0116 21:22:15.021220 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d6d\" (UniqueName: \"kubernetes.io/projected/1ffcbae4-3231-47a7-b3a3-9a78e5206e0e-kube-api-access-29d6d\") pod \"whisker-65bbd7c669-5jcq4\" (UID: \"1ffcbae4-3231-47a7-b3a3-9a78e5206e0e\") " pod="calico-system/whisker-65bbd7c669-5jcq4"
Jan 16 21:22:15.402874 systemd-networkd[1513]: calib42e38e2672: Link UP
Jan 16 21:22:15.412316 systemd-networkd[1513]: calib42e38e2672: Gained carrier
Jan 16 21:22:15.484042 kubelet[2829]: I0116 21:22:15.483850 2829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d" path="/var/lib/kubelet/pods/1bb2c8da-8b40-42bd-b0b7-c9e61aa8909d/volumes"
Jan 16 21:22:15.501867 containerd[1596]: 2026-01-16 21:22:13.780 [INFO][4261] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 21:22:15.501867 containerd[1596]: 2026-01-16 21:22:13.934 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0 calico-apiserver-6f68b6d698- calico-apiserver cf888ed5-265d-4b90-8b8f-76579a07e031 902 0 2026-01-16 21:21:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f68b6d698 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f68b6d698-x2ltk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib42e38e2672 [] [] }} ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-"
Jan 16 21:22:15.501867 containerd[1596]: 2026-01-16 21:22:13.934 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0"
Jan 16 21:22:15.501867 containerd[1596]: 2026-01-16 21:22:14.589 [INFO][4311] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" HandleID="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Workload="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0"
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:14.610 [INFO][4311] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" HandleID="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Workload="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004544e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f68b6d698-x2ltk", "timestamp":"2026-01-16 21:22:14.589336228 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:14.610 [INFO][4311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:14.610 [INFO][4311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:14.640 [INFO][4311] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:14.785 [INFO][4311] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" host="localhost"
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:14.847 [INFO][4311] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:15.004 [INFO][4311] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:15.077 [INFO][4311] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:15.098 [INFO][4311] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 16 21:22:15.503784 containerd[1596]: 2026-01-16 21:22:15.100 [INFO][4311] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" host="localhost"
Jan 16 21:22:15.505677 containerd[1596]: 2026-01-16 21:22:15.139 [INFO][4311] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b
Jan 16 21:22:15.505677 containerd[1596]: 2026-01-16 21:22:15.218 [INFO][4311] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" host="localhost"
Jan 16 21:22:15.505677 containerd[1596]: 2026-01-16 21:22:15.266 [INFO][4311] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" host="localhost"
Jan 16 21:22:15.505677 containerd[1596]: 2026-01-16 21:22:15.266 [INFO][4311] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" host="localhost"
Jan 16 21:22:15.505677 containerd[1596]: 2026-01-16 21:22:15.266 [INFO][4311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 16 21:22:15.505677 containerd[1596]: 2026-01-16 21:22:15.266 [INFO][4311] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" HandleID="k8s-pod-network.4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Workload="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0"
Jan 16 21:22:15.505930 containerd[1596]: 2026-01-16 21:22:15.305 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0", GenerateName:"calico-apiserver-6f68b6d698-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf888ed5-265d-4b90-8b8f-76579a07e031", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f68b6d698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f68b6d698-x2ltk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42e38e2672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 21:22:15.507434 containerd[1596]: 2026-01-16 21:22:15.307 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0"
Jan 16 21:22:15.507434 containerd[1596]: 2026-01-16 21:22:15.307 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib42e38e2672 ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0"
Jan 16 21:22:15.507434 containerd[1596]: 2026-01-16 21:22:15.426 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0"
Jan 16 21:22:15.507620 containerd[1596]: 2026-01-16 21:22:15.427 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0", GenerateName:"calico-apiserver-6f68b6d698-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf888ed5-265d-4b90-8b8f-76579a07e031", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f68b6d698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b", Pod:"calico-apiserver-6f68b6d698-x2ltk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42e38e2672", MAC:"5a:ec:54:02:88:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 16 21:22:15.507809 containerd[1596]: 2026-01-16 21:22:15.494 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-x2ltk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--x2ltk-eth0"
Jan 16 21:22:15.577064 systemd-networkd[1513]: cali570cb61bf3f: Link UP
Jan 16 21:22:15.577473 systemd-networkd[1513]: cali570cb61bf3f: Gained carrier
Jan 16 21:22:15.601617 containerd[1596]: time="2026-01-16T21:22:15.601500121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bbd7c669-5jcq4,Uid:1ffcbae4-3231-47a7-b3a3-9a78e5206e0e,Namespace:calico-system,Attempt:0,}"
Jan 16 21:22:15.687278 containerd[1596]: 2026-01-16 21:22:13.812 [INFO][4234] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 16 21:22:15.687278 containerd[1596]: 2026-01-16 21:22:13.940 [INFO][4234] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0 calico-apiserver-6f68b6d698- calico-apiserver 484b15e8-2e9e-4270-8a9c-899b52ca1f08 911 0 2026-01-16 21:21:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f68b6d698 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f68b6d698-6gdmk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali570cb61bf3f [] [] }} ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-"
Jan 16 21:22:15.687278 containerd[1596]: 2026-01-16 21:22:13.940 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d"
Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" Jan 16 21:22:15.687278 containerd[1596]: 2026-01-16 21:22:14.588 [INFO][4309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" HandleID="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Workload="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:14.620 [INFO][4309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" HandleID="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Workload="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f68b6d698-6gdmk", "timestamp":"2026-01-16 21:22:14.587997507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:14.620 [INFO][4309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.269 [INFO][4309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.279 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.308 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" host="localhost" Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.341 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.396 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.418 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.436 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:15.688845 containerd[1596]: 2026-01-16 21:22:15.442 [INFO][4309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" host="localhost" Jan 16 21:22:15.689390 containerd[1596]: 2026-01-16 21:22:15.478 [INFO][4309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d Jan 16 21:22:15.689390 containerd[1596]: 2026-01-16 21:22:15.500 [INFO][4309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" host="localhost" Jan 16 21:22:15.689390 containerd[1596]: 2026-01-16 21:22:15.533 [INFO][4309] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" host="localhost" Jan 16 21:22:15.689390 containerd[1596]: 2026-01-16 21:22:15.533 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" host="localhost" Jan 16 21:22:15.689390 containerd[1596]: 2026-01-16 21:22:15.534 [INFO][4309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:22:15.689390 containerd[1596]: 2026-01-16 21:22:15.535 [INFO][4309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" HandleID="k8s-pod-network.378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Workload="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" Jan 16 21:22:15.689770 containerd[1596]: 2026-01-16 21:22:15.558 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0", GenerateName:"calico-apiserver-6f68b6d698-", Namespace:"calico-apiserver", SelfLink:"", UID:"484b15e8-2e9e-4270-8a9c-899b52ca1f08", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f68b6d698", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f68b6d698-6gdmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali570cb61bf3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:15.689941 containerd[1596]: 2026-01-16 21:22:15.558 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" Jan 16 21:22:15.689941 containerd[1596]: 2026-01-16 21:22:15.559 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali570cb61bf3f ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" Jan 16 21:22:15.689941 containerd[1596]: 2026-01-16 21:22:15.574 [INFO][4234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" Jan 16 21:22:15.690042 containerd[1596]: 2026-01-16 21:22:15.580 [INFO][4234] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0", GenerateName:"calico-apiserver-6f68b6d698-", Namespace:"calico-apiserver", SelfLink:"", UID:"484b15e8-2e9e-4270-8a9c-899b52ca1f08", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f68b6d698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d", Pod:"calico-apiserver-6f68b6d698-6gdmk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali570cb61bf3f", MAC:"8a:fb:02:ed:9f:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:15.690281 containerd[1596]: 2026-01-16 21:22:15.670 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" Namespace="calico-apiserver" Pod="calico-apiserver-6f68b6d698-6gdmk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f68b6d698--6gdmk-eth0" Jan 16 21:22:15.904639 containerd[1596]: time="2026-01-16T21:22:15.903410814Z" level=info msg="connecting to shim 4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b" address="unix:///run/containerd/s/63b6b996495b24874d6d8e935294f34f5eaf9f469cea0a00973c64d97e567c51" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:16.018648 containerd[1596]: time="2026-01-16T21:22:16.017976791Z" level=info msg="connecting to shim 378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d" address="unix:///run/containerd/s/16a0f61b5c8df2c2aba6c8e511a4d68df12b5a5820bbf2297b4906353949b56d" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:16.036198 systemd-networkd[1513]: cali98800791c44: Link UP Jan 16 21:22:16.039822 systemd-networkd[1513]: cali98800791c44: Gained carrier Jan 16 21:22:16.207057 containerd[1596]: 2026-01-16 21:22:13.789 [INFO][4247] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:22:16.207057 containerd[1596]: 2026-01-16 21:22:13.934 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--j7hqz-eth0 goldmane-666569f655- calico-system 044f9539-8858-49e2-8876-e2c650ad8d77 891 0 2026-01-16 21:21:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-j7hqz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali98800791c44 [] [] }} ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-" Jan 16 21:22:16.207057 containerd[1596]: 2026-01-16 21:22:13.934 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-eth0" Jan 16 21:22:16.207057 containerd[1596]: 2026-01-16 21:22:14.557 [INFO][4307] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" HandleID="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Workload="localhost-k8s-goldmane--666569f655--j7hqz-eth0" Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:14.632 [INFO][4307] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" HandleID="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Workload="localhost-k8s-goldmane--666569f655--j7hqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5c00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-j7hqz", "timestamp":"2026-01-16 21:22:14.557917152 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:14.632 [INFO][4307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.534 [INFO][4307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.534 [INFO][4307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.685 [INFO][4307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" host="localhost" Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.731 [INFO][4307] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.817 [INFO][4307] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.836 [INFO][4307] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.854 [INFO][4307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:16.207676 containerd[1596]: 2026-01-16 21:22:15.854 [INFO][4307] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" host="localhost" Jan 16 21:22:16.208946 containerd[1596]: 2026-01-16 21:22:15.889 [INFO][4307] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980 Jan 16 21:22:16.208946 containerd[1596]: 2026-01-16 21:22:15.918 [INFO][4307] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" host="localhost" Jan 16 21:22:16.208946 containerd[1596]: 2026-01-16 21:22:15.963 [INFO][4307] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" host="localhost" Jan 16 21:22:16.208946 containerd[1596]: 2026-01-16 21:22:15.963 [INFO][4307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" host="localhost" Jan 16 21:22:16.208946 containerd[1596]: 2026-01-16 21:22:15.963 [INFO][4307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:22:16.208946 containerd[1596]: 2026-01-16 21:22:15.963 [INFO][4307] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" HandleID="k8s-pod-network.3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Workload="localhost-k8s-goldmane--666569f655--j7hqz-eth0" Jan 16 21:22:16.209503 containerd[1596]: 2026-01-16 21:22:15.981 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--j7hqz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"044f9539-8858-49e2-8876-e2c650ad8d77", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-j7hqz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98800791c44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:16.209503 containerd[1596]: 2026-01-16 21:22:15.981 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-eth0" Jan 16 21:22:16.209797 containerd[1596]: 2026-01-16 21:22:15.987 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98800791c44 ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-eth0" Jan 16 21:22:16.209797 containerd[1596]: 2026-01-16 21:22:16.096 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-eth0" Jan 16 21:22:16.209858 containerd[1596]: 2026-01-16 21:22:16.103 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--j7hqz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"044f9539-8858-49e2-8876-e2c650ad8d77", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980", Pod:"goldmane-666569f655-j7hqz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98800791c44", MAC:"0a:a0:22:2f:d9:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:16.210039 containerd[1596]: 2026-01-16 21:22:16.169 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" Namespace="calico-system" Pod="goldmane-666569f655-j7hqz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--j7hqz-eth0" Jan 16 21:22:16.390433 systemd[1]: Started 
cri-containerd-4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b.scope - libcontainer container 4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b. Jan 16 21:22:16.511934 systemd[1]: Started cri-containerd-378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d.scope - libcontainer container 378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d. Jan 16 21:22:16.587399 systemd-networkd[1513]: cali99d5ebd52fe: Link UP Jan 16 21:22:16.610729 systemd-networkd[1513]: cali99d5ebd52fe: Gained carrier Jan 16 21:22:16.735000 audit: BPF prog-id=175 op=LOAD Jan 16 21:22:16.740000 audit: BPF prog-id=176 op=LOAD Jan 16 21:22:16.740000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021e238 a2=98 a3=0 items=0 ppid=4506 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337386562353334336339636133343466316139616232356637343936 Jan 16 21:22:16.741000 audit: BPF prog-id=176 op=UNLOAD Jan 16 21:22:16.741000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337386562353334336339636133343466316139616232356637343936 Jan 16 21:22:16.744000 audit: BPF 
prog-id=177 op=LOAD Jan 16 21:22:16.744000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021e488 a2=98 a3=0 items=0 ppid=4506 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337386562353334336339636133343466316139616232356637343936 Jan 16 21:22:16.752000 audit: BPF prog-id=178 op=LOAD Jan 16 21:22:16.752000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00021e218 a2=98 a3=0 items=0 ppid=4506 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337386562353334336339636133343466316139616232356637343936 Jan 16 21:22:16.752000 audit: BPF prog-id=178 op=UNLOAD Jan 16 21:22:16.752000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.752000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337386562353334336339636133343466316139616232356637343936 Jan 16 21:22:16.752000 audit: BPF prog-id=177 op=UNLOAD Jan 16 21:22:16.752000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337386562353334336339636133343466316139616232356637343936 Jan 16 21:22:16.752000 audit: BPF prog-id=179 op=LOAD Jan 16 21:22:16.752000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021e6e8 a2=98 a3=0 items=0 ppid=4506 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337386562353334336339636133343466316139616232356637343936 Jan 16 21:22:16.758000 audit: BPF prog-id=180 op=LOAD Jan 16 21:22:16.764000 audit: BPF prog-id=181 op=LOAD Jan 16 21:22:16.764000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=4463 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433343564316136313766356230373431646134363265633766303861 Jan 16 21:22:16.764000 audit: BPF prog-id=181 op=UNLOAD Jan 16 21:22:16.764000 audit[4500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4463 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433343564316136313766356230373431646134363265633766303861 Jan 16 21:22:16.766000 audit: BPF prog-id=182 op=LOAD Jan 16 21:22:16.766000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=4463 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433343564316136313766356230373431646134363265633766303861 Jan 16 21:22:16.766000 audit: BPF prog-id=183 op=LOAD Jan 16 21:22:16.766000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=4463 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433343564316136313766356230373431646134363265633766303861 Jan 16 21:22:16.766000 audit: BPF prog-id=183 op=UNLOAD Jan 16 21:22:16.766000 audit[4500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4463 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433343564316136313766356230373431646134363265633766303861 Jan 16 21:22:16.766000 audit: BPF prog-id=182 op=UNLOAD Jan 16 21:22:16.766000 audit[4500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4463 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433343564316136313766356230373431646134363265633766303861 Jan 16 21:22:16.766000 audit: BPF prog-id=184 op=LOAD Jan 16 21:22:16.766000 audit[4500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=4463 pid=4500 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:16.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433343564316136313766356230373431646134363265633766303861 Jan 16 21:22:16.802241 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:16.836975 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:16.952223 containerd[1596]: 2026-01-16 21:22:15.049 [INFO][4342] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:22:16.952223 containerd[1596]: 2026-01-16 21:22:15.233 [INFO][4342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4hncm-eth0 csi-node-driver- calico-system 8c8c0e82-b18e-4cf2-bc74-ab0296b892f6 771 0 2026-01-16 21:21:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4hncm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali99d5ebd52fe [] [] }} ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Namespace="calico-system" Pod="csi-node-driver-4hncm" WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-" Jan 16 21:22:16.952223 containerd[1596]: 2026-01-16 21:22:15.236 [INFO][4342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Namespace="calico-system" Pod="csi-node-driver-4hncm" WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-eth0" Jan 16 21:22:16.952223 containerd[1596]: 2026-01-16 21:22:15.424 [INFO][4383] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" HandleID="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Workload="localhost-k8s-csi--node--driver--4hncm-eth0" Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:15.424 [INFO][4383] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" HandleID="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Workload="localhost-k8s-csi--node--driver--4hncm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000217150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4hncm", "timestamp":"2026-01-16 21:22:15.424047743 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:15.424 [INFO][4383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:15.974 [INFO][4383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:15.974 [INFO][4383] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:16.045 [INFO][4383] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" host="localhost" Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:16.102 [INFO][4383] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:16.140 [INFO][4383] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:16.171 [INFO][4383] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:16.244 [INFO][4383] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:16.953040 containerd[1596]: 2026-01-16 21:22:16.257 [INFO][4383] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" host="localhost" Jan 16 21:22:16.954248 containerd[1596]: 2026-01-16 21:22:16.292 [INFO][4383] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9 Jan 16 21:22:16.954248 containerd[1596]: 2026-01-16 21:22:16.391 [INFO][4383] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" host="localhost" Jan 16 21:22:16.954248 containerd[1596]: 2026-01-16 21:22:16.476 [INFO][4383] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" host="localhost" Jan 16 21:22:16.954248 containerd[1596]: 2026-01-16 21:22:16.477 [INFO][4383] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" host="localhost" Jan 16 21:22:16.954248 containerd[1596]: 2026-01-16 21:22:16.480 [INFO][4383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:22:16.954248 containerd[1596]: 2026-01-16 21:22:16.483 [INFO][4383] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" HandleID="k8s-pod-network.9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Workload="localhost-k8s-csi--node--driver--4hncm-eth0" Jan 16 21:22:16.954428 containerd[1596]: 2026-01-16 21:22:16.519 [INFO][4342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Namespace="calico-system" Pod="csi-node-driver-4hncm" WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4hncm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c8c0e82-b18e-4cf2-bc74-ab0296b892f6", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4hncm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99d5ebd52fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:16.954654 containerd[1596]: 2026-01-16 21:22:16.520 [INFO][4342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Namespace="calico-system" Pod="csi-node-driver-4hncm" WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-eth0" Jan 16 21:22:16.954654 containerd[1596]: 2026-01-16 21:22:16.520 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99d5ebd52fe ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Namespace="calico-system" Pod="csi-node-driver-4hncm" WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-eth0" Jan 16 21:22:16.954654 containerd[1596]: 2026-01-16 21:22:16.706 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Namespace="calico-system" Pod="csi-node-driver-4hncm" WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-eth0" Jan 16 21:22:16.954750 containerd[1596]: 2026-01-16 21:22:16.714 [INFO][4342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" 
Namespace="calico-system" Pod="csi-node-driver-4hncm" WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4hncm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c8c0e82-b18e-4cf2-bc74-ab0296b892f6", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9", Pod:"csi-node-driver-4hncm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99d5ebd52fe", MAC:"52:53:99:31:4e:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:16.954924 containerd[1596]: 2026-01-16 21:22:16.932 [INFO][4342] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" Namespace="calico-system" Pod="csi-node-driver-4hncm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4hncm-eth0" Jan 16 21:22:17.042039 containerd[1596]: time="2026-01-16T21:22:17.041888685Z" level=info msg="connecting to shim 3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980" address="unix:///run/containerd/s/e210a8fc4314c24c060d8c39f7f51b1f9b26828f15a7777a5ae5fc1afc8dbf04" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:17.097853 systemd-networkd[1513]: cali4491f4cbeaf: Link UP Jan 16 21:22:17.127752 systemd-networkd[1513]: cali4491f4cbeaf: Gained carrier Jan 16 21:22:17.255062 containerd[1596]: time="2026-01-16T21:22:17.253340286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-x2ltk,Uid:cf888ed5-265d-4b90-8b8f-76579a07e031,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4345d1a617f5b0741da462ec7f08a7a6ee049f80bf309955efe44a7c2192fa3b\"" Jan 16 21:22:17.289236 containerd[1596]: time="2026-01-16T21:22:17.289192127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:17.301365 containerd[1596]: 2026-01-16 21:22:14.893 [INFO][4340] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:22:17.301365 containerd[1596]: 2026-01-16 21:22:15.119 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6vb67-eth0 coredns-668d6bf9bc- kube-system 27a58ce5-0b24-4017-b5c5-f30f4c025ef8 896 0 2026-01-16 21:21:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6vb67 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4491f4cbeaf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6vb67" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-" Jan 16 21:22:17.301365 containerd[1596]: 2026-01-16 21:22:15.119 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-6vb67" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" Jan 16 21:22:17.301365 containerd[1596]: 2026-01-16 21:22:15.448 [INFO][4377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" HandleID="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Workload="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:15.450 [INFO][4377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" HandleID="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Workload="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6vb67", "timestamp":"2026-01-16 21:22:15.448807118 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:15.450 [INFO][4377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.487 [INFO][4377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.490 [INFO][4377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.604 [INFO][4377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" host="localhost" Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.690 [INFO][4377] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.807 [INFO][4377] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.838 [INFO][4377] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.856 [INFO][4377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:17.301798 containerd[1596]: 2026-01-16 21:22:16.858 [INFO][4377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" host="localhost" Jan 16 21:22:17.303417 containerd[1596]: 2026-01-16 21:22:16.909 [INFO][4377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1 Jan 16 21:22:17.303417 containerd[1596]: 2026-01-16 21:22:16.941 [INFO][4377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" host="localhost" Jan 16 21:22:17.303417 containerd[1596]: 2026-01-16 21:22:17.019 [INFO][4377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" host="localhost" Jan 16 21:22:17.303417 containerd[1596]: 2026-01-16 21:22:17.024 [INFO][4377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" host="localhost" Jan 16 21:22:17.303417 containerd[1596]: 2026-01-16 21:22:17.024 [INFO][4377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:22:17.303417 containerd[1596]: 2026-01-16 21:22:17.024 [INFO][4377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" HandleID="k8s-pod-network.0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Workload="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" Jan 16 21:22:17.303677 containerd[1596]: 2026-01-16 21:22:17.079 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-6vb67" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6vb67-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"27a58ce5-0b24-4017-b5c5-f30f4c025ef8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6vb67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4491f4cbeaf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:17.303885 containerd[1596]: 2026-01-16 21:22:17.079 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-6vb67" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" Jan 16 21:22:17.303885 containerd[1596]: 2026-01-16 21:22:17.079 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4491f4cbeaf ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-6vb67" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" Jan 16 21:22:17.303885 containerd[1596]: 2026-01-16 21:22:17.164 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-6vb67" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" Jan 16 21:22:17.304034 containerd[1596]: 2026-01-16 21:22:17.184 [INFO][4340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-6vb67" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6vb67-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"27a58ce5-0b24-4017-b5c5-f30f4c025ef8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1", Pod:"coredns-668d6bf9bc-6vb67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4491f4cbeaf", MAC:"ca:be:12:07:39:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:17.304034 containerd[1596]: 2026-01-16 21:22:17.235 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-6vb67" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6vb67-eth0" Jan 16 21:22:17.338585 containerd[1596]: time="2026-01-16T21:22:17.338235592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f68b6d698-6gdmk,Uid:484b15e8-2e9e-4270-8a9c-899b52ca1f08,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"378eb5343c9ca344f1a9ab25f7496f21127eccd24ada78454e8476cd806cc54d\"" Jan 16 21:22:17.373408 systemd-networkd[1513]: calib42e38e2672: Gained IPv6LL Jan 16 21:22:17.472358 containerd[1596]: time="2026-01-16T21:22:17.472288014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:17.479765 containerd[1596]: time="2026-01-16T21:22:17.479335546Z" level=info msg="connecting to shim 9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9" address="unix:///run/containerd/s/c8cc98321d936f354975f30675b4ae6e2b2c11ed0f18c682cd511fe8d109c528" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:17.500308 systemd-networkd[1513]: cali98800791c44: Gained IPv6LL Jan 16 21:22:17.539422 systemd[1]: Started cri-containerd-3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980.scope - libcontainer container 3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980. 
Jan 16 21:22:17.557950 containerd[1596]: time="2026-01-16T21:22:17.557012144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:17.568901 systemd-networkd[1513]: calia822ec26e90: Link UP Jan 16 21:22:17.582688 containerd[1596]: time="2026-01-16T21:22:17.582263596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:17.583485 kubelet[2829]: E0116 21:22:17.583444 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:17.586822 kubelet[2829]: E0116 21:22:17.585480 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:17.604258 kubelet[2829]: E0116 21:22:17.601958 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77ptz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:17.608367 containerd[1596]: time="2026-01-16T21:22:17.607405355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:17.636982 systemd-networkd[1513]: calia822ec26e90: Gained carrier Jan 16 21:22:17.643654 systemd-networkd[1513]: cali570cb61bf3f: Gained IPv6LL Jan 16 21:22:17.653843 kubelet[2829]: E0116 21:22:17.653328 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:15.976 [INFO][4414] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:16.149 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--65bbd7c669--5jcq4-eth0 whisker-65bbd7c669- calico-system 1ffcbae4-3231-47a7-b3a3-9a78e5206e0e 1027 0 2026-01-16 21:22:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65bbd7c669 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-65bbd7c669-5jcq4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia822ec26e90 [] [] }} ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" 
WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:16.158 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:16.636 [INFO][4561] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" HandleID="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Workload="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:16.643 [INFO][4561] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" HandleID="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Workload="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317970), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-65bbd7c669-5jcq4", "timestamp":"2026-01-16 21:22:16.636432346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:16.644 [INFO][4561] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.028 [INFO][4561] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.028 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.121 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.196 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.270 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.282 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.308 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.310 [INFO][4561] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.332 [INFO][4561] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239 Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.398 [INFO][4561] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.483 [INFO][4561] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.484 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" host="localhost" Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.484 [INFO][4561] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:22:17.810391 containerd[1596]: 2026-01-16 21:22:17.484 [INFO][4561] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" HandleID="k8s-pod-network.4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Workload="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" Jan 16 21:22:17.817928 containerd[1596]: 2026-01-16 21:22:17.525 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65bbd7c669--5jcq4-eth0", GenerateName:"whisker-65bbd7c669-", Namespace:"calico-system", SelfLink:"", UID:"1ffcbae4-3231-47a7-b3a3-9a78e5206e0e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bbd7c669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-65bbd7c669-5jcq4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia822ec26e90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:17.817928 containerd[1596]: 2026-01-16 21:22:17.525 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" Jan 16 21:22:17.817928 containerd[1596]: 2026-01-16 21:22:17.526 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia822ec26e90 ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" Jan 16 21:22:17.817928 containerd[1596]: 2026-01-16 21:22:17.631 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" Jan 16 21:22:17.817928 containerd[1596]: 2026-01-16 21:22:17.638 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" 
WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65bbd7c669--5jcq4-eth0", GenerateName:"whisker-65bbd7c669-", Namespace:"calico-system", SelfLink:"", UID:"1ffcbae4-3231-47a7-b3a3-9a78e5206e0e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 22, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bbd7c669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239", Pod:"whisker-65bbd7c669-5jcq4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia822ec26e90", MAC:"42:ea:b3:8a:d7:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:17.817928 containerd[1596]: 2026-01-16 21:22:17.711 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" Namespace="calico-system" Pod="whisker-65bbd7c669-5jcq4" WorkloadEndpoint="localhost-k8s-whisker--65bbd7c669--5jcq4-eth0" Jan 16 21:22:17.854202 containerd[1596]: time="2026-01-16T21:22:17.848947043Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Jan 16 21:22:17.868469 containerd[1596]: time="2026-01-16T21:22:17.868419922Z" level=info msg="connecting to shim 0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1" address="unix:///run/containerd/s/2b2c6d3895af517caa6d56c978fa8da3c7200a9f1fa860a75ebafb48f7e29d44" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:17.872435 containerd[1596]: time="2026-01-16T21:22:17.872326089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:17.874027 kubelet[2829]: E0116 21:22:17.873030 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:17.874027 kubelet[2829]: E0116 21:22:17.873208 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:17.874027 kubelet[2829]: E0116 21:22:17.873352 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czw4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:17.874672 containerd[1596]: time="2026-01-16T21:22:17.873051380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:17.875062 kubelet[2829]: E0116 21:22:17.874981 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:22:17.948452 systemd[1]: Started cri-containerd-9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9.scope - libcontainer container 9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9. 
Jan 16 21:22:17.985238 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 16 21:22:17.985368 kernel: audit: type=1334 audit(1768598537.967:593): prog-id=185 op=LOAD Jan 16 21:22:17.967000 audit: BPF prog-id=185 op=LOAD Jan 16 21:22:18.018612 kernel: audit: type=1334 audit(1768598537.997:594): prog-id=186 op=LOAD Jan 16 21:22:17.997000 audit: BPF prog-id=186 op=LOAD Jan 16 21:22:18.014738 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:17.997000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.064721 kernel: audit: type=1300 audit(1768598537.997:594): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.067715 systemd[1]: Started cri-containerd-0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1.scope - libcontainer container 0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1. 
Jan 16 21:22:17.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:18.133377 containerd[1596]: time="2026-01-16T21:22:18.086268926Z" level=info msg="connecting to shim 4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239" address="unix:///run/containerd/s/f3983a739a506b23a11842e243dd44cc11692267eb116e9eadbac51e7e5c0e05" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:17.997000 audit: BPF prog-id=186 op=UNLOAD Jan 16 21:22:18.146039 kernel: audit: type=1327 audit(1768598537.997:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:18.146240 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Jan 16 21:22:18.146278 kernel: audit: type=1334 audit(1768598537.997:595): prog-id=186 op=UNLOAD Jan 16 21:22:18.146317 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Jan 16 21:22:18.146343 kernel: audit: backlog limit exceeded Jan 16 21:22:18.211412 kernel: audit: type=1300 audit(1768598537.997:595): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:17.997000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:22:17.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:18.257366 kernel: audit: type=1327 audit(1768598537.997:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:17.997000 audit: BPF prog-id=187 op=LOAD Jan 16 21:22:17.997000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:17.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:17.997000 audit: BPF prog-id=188 op=LOAD Jan 16 21:22:17.997000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:17.997000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:17.997000 audit: BPF prog-id=188 op=UNLOAD Jan 16 21:22:17.997000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:17.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:17.997000 audit: BPF prog-id=187 op=UNLOAD Jan 16 21:22:17.997000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:17.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:17.997000 audit: BPF prog-id=189 op=LOAD Jan 16 21:22:17.997000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4642 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:22:17.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334303066656433353030336337386136303162303138646462363935 Jan 16 21:22:18.026000 audit: BPF prog-id=190 op=LOAD Jan 16 21:22:18.026000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd6ba7bb80 a2=98 a3=1fffffffffffffff items=0 ppid=4441 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.026000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:22:18.026000 audit: BPF prog-id=190 op=UNLOAD Jan 16 21:22:18.026000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd6ba7bb50 a3=0 items=0 ppid=4441 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.026000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:22:18.028000 audit: BPF prog-id=191 op=LOAD Jan 16 21:22:18.028000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd6ba7ba60 a2=94 a3=3 items=0 ppid=4441 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.028000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:22:18.028000 audit: BPF prog-id=191 op=UNLOAD Jan 16 21:22:18.028000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd6ba7ba60 a2=94 a3=3 items=0 ppid=4441 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.028000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:22:18.028000 audit: BPF prog-id=192 op=LOAD Jan 16 21:22:18.028000 audit[4791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd6ba7baa0 a2=94 a3=7ffd6ba7bc80 items=0 ppid=4441 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.028000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:22:18.028000 audit: BPF prog-id=192 op=UNLOAD Jan 16 21:22:18.028000 audit[4791]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd6ba7baa0 a2=94 a3=7ffd6ba7bc80 items=0 ppid=4441 pid=4791 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.028000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 21:22:18.082000 audit: BPF prog-id=193 op=LOAD Jan 16 21:22:18.082000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd147b3750 a2=98 a3=3 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.082000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:18.083000 audit: BPF prog-id=193 op=UNLOAD Jan 16 21:22:18.083000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd147b3720 a3=0 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:18.090000 audit: BPF prog-id=194 op=LOAD Jan 16 21:22:18.090000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd147b3540 a2=94 a3=54428f items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.090000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:18.090000 audit: BPF prog-id=194 op=UNLOAD Jan 16 
21:22:18.090000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd147b3540 a2=94 a3=54428f items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.090000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:18.151000 audit: BPF prog-id=196 op=LOAD Jan 16 21:22:18.165000 audit: BPF prog-id=197 op=LOAD Jan 16 21:22:18.268000 audit: BPF prog-id=195 op=UNLOAD Jan 16 21:22:18.268000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd147b3570 a2=0 a3=2 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.268000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:18.165000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4697 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964316130623564633237386237363865363638363466396366316162 Jan 16 21:22:18.277000 audit: BPF prog-id=197 op=UNLOAD Jan 16 21:22:18.277000 audit[4712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4697 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:22:18.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964316130623564633237386237363865363638363466396366316162 Jan 16 21:22:18.281000 audit: BPF prog-id=198 op=LOAD Jan 16 21:22:18.281000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4697 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964316130623564633237386237363865363638363466396366316162 Jan 16 21:22:18.281000 audit: BPF prog-id=199 op=LOAD Jan 16 21:22:18.281000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4697 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964316130623564633237386237363865363638363466396366316162 Jan 16 21:22:18.281000 audit: BPF prog-id=199 op=UNLOAD Jan 16 21:22:18.281000 audit[4712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4697 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964316130623564633237386237363865363638363466396366316162 Jan 16 21:22:18.281000 audit: BPF prog-id=198 op=UNLOAD Jan 16 21:22:18.281000 audit[4712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4697 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964316130623564633237386237363865363638363466396366316162 Jan 16 21:22:18.281000 audit: BPF prog-id=200 op=LOAD Jan 16 21:22:18.281000 audit[4712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4697 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964316130623564633237386237363865363638363466396366316162 Jan 16 21:22:18.304000 audit: BPF prog-id=201 op=LOAD Jan 16 21:22:18.357000 audit: BPF prog-id=202 op=LOAD Jan 16 21:22:18.357000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4742 pid=4760 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373565613265343332343239663635373934373864623466633662 Jan 16 21:22:18.371000 audit: BPF prog-id=202 op=UNLOAD Jan 16 21:22:18.371000 audit[4760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373565613265343332343239663635373934373864623466633662 Jan 16 21:22:18.372000 audit: BPF prog-id=203 op=LOAD Jan 16 21:22:18.372000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4742 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373565613265343332343239663635373934373864623466633662 Jan 16 21:22:18.372000 audit: BPF prog-id=204 op=LOAD Jan 16 21:22:18.372000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c0001a8218 a2=98 a3=0 items=0 ppid=4742 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373565613265343332343239663635373934373864623466633662 Jan 16 21:22:18.372000 audit: BPF prog-id=204 op=UNLOAD Jan 16 21:22:18.372000 audit[4760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373565613265343332343239663635373934373864623466633662 Jan 16 21:22:18.372000 audit: BPF prog-id=203 op=UNLOAD Jan 16 21:22:18.372000 audit[4760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373565613265343332343239663635373934373864623466633662 Jan 16 21:22:18.373000 audit: BPF prog-id=205 op=LOAD Jan 16 21:22:18.373000 audit[4760]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4742 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066373565613265343332343239663635373934373864623466633662 Jan 16 21:22:18.359513 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:18.487341 systemd-networkd[1513]: cali4491f4cbeaf: Gained IPv6LL Jan 16 21:22:18.524447 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:18.530409 systemd[1]: Started cri-containerd-4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239.scope - libcontainer container 4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239. 
Jan 16 21:22:18.622870 containerd[1596]: time="2026-01-16T21:22:18.622620889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j7hqz,Uid:044f9539-8858-49e2-8876-e2c650ad8d77,Namespace:calico-system,Attempt:0,} returns sandbox id \"3400fed35003c78a601b018ddb6956c1c9966282c0286a576ead4d69209f7980\"" Jan 16 21:22:18.654027 systemd-networkd[1513]: cali99d5ebd52fe: Gained IPv6LL Jan 16 21:22:18.659440 containerd[1596]: time="2026-01-16T21:22:18.659406715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:22:18.675736 kubelet[2829]: E0116 21:22:18.675700 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:22:18.677218 kubelet[2829]: E0116 21:22:18.676690 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:22:18.694000 audit: BPF prog-id=206 op=LOAD Jan 16 21:22:18.700000 audit: BPF prog-id=207 op=LOAD Jan 16 21:22:18.700000 audit[4828]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000246238 a2=98 a3=0 items=0 ppid=4805 pid=4828 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663262386632356438646266363938393739613831656538626531 Jan 16 21:22:18.702000 audit: BPF prog-id=207 op=UNLOAD Jan 16 21:22:18.702000 audit[4828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4805 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663262386632356438646266363938393739613831656538626531 Jan 16 21:22:18.729000 audit: BPF prog-id=208 op=LOAD Jan 16 21:22:18.729000 audit[4828]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000246488 a2=98 a3=0 items=0 ppid=4805 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663262386632356438646266363938393739613831656538626531 Jan 16 21:22:18.729000 audit: BPF prog-id=209 op=LOAD Jan 16 21:22:18.729000 audit[4828]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c000246218 a2=98 a3=0 items=0 ppid=4805 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663262386632356438646266363938393739613831656538626531 Jan 16 21:22:18.729000 audit: BPF prog-id=209 op=UNLOAD Jan 16 21:22:18.729000 audit[4828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4805 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663262386632356438646266363938393739613831656538626531 Jan 16 21:22:18.729000 audit: BPF prog-id=208 op=UNLOAD Jan 16 21:22:18.729000 audit[4828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4805 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663262386632356438646266363938393739613831656538626531 Jan 16 21:22:18.729000 audit: BPF prog-id=210 op=LOAD Jan 16 21:22:18.729000 audit[4828]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0002466e8 a2=98 a3=0 items=0 ppid=4805 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:18.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464663262386632356438646266363938393739613831656538626531 Jan 16 21:22:18.926222 containerd[1596]: time="2026-01-16T21:22:18.922662594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:18.928742 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:18.971736 containerd[1596]: time="2026-01-16T21:22:18.970957021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6vb67,Uid:27a58ce5-0b24-4017-b5c5-f30f4c025ef8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1\"" Jan 16 21:22:18.979726 kubelet[2829]: E0116 21:22:18.979328 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:18.988376 containerd[1596]: time="2026-01-16T21:22:18.988318343Z" level=info msg="CreateContainer within sandbox \"0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:22:18.998788 containerd[1596]: time="2026-01-16T21:22:18.998597915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:22:18.998788 containerd[1596]: time="2026-01-16T21:22:18.998688815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:18.998957 kubelet[2829]: E0116 21:22:18.998890 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:22:18.998957 kubelet[2829]: E0116 21:22:18.998942 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:22:19.008277 kubelet[2829]: E0116 21:22:19.007901 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zkb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:19.010262 containerd[1596]: time="2026-01-16T21:22:19.009752941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4hncm,Uid:8c8c0e82-b18e-4cf2-bc74-ab0296b892f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d1a0b5dc278b768e66864f9cf1ab89703b95dd8b47567e0a2ff61e21ccec5e9\"" Jan 16 21:22:19.010411 kubelet[2829]: E0116 21:22:19.009939 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:22:19.022906 containerd[1596]: 
time="2026-01-16T21:22:19.020292702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:22:19.050000 audit[4869]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:19.050000 audit[4869]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd7ee78cb0 a2=0 a3=7ffd7ee78c9c items=0 ppid=2988 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.050000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:19.074000 audit[4869]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:19.074000 audit[4869]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd7ee78cb0 a2=0 a3=0 items=0 ppid=2988 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.074000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:19.102966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599540840.mount: Deactivated successfully. 
Jan 16 21:22:19.126307 containerd[1596]: time="2026-01-16T21:22:19.125321496Z" level=info msg="Container 473deb54847b9c488f13402235100d3dfb9c74f4f5681a70276682da3da5ffc7: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:22:19.128507 containerd[1596]: time="2026-01-16T21:22:19.127874301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:19.145288 containerd[1596]: time="2026-01-16T21:22:19.135947777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:22:19.147181 containerd[1596]: time="2026-01-16T21:22:19.146907769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:19.151218 kubelet[2829]: E0116 21:22:19.151052 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:19.152805 kubelet[2829]: E0116 21:22:19.151439 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:19.152805 kubelet[2829]: E0116 21:22:19.151656 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:22:19.172692 containerd[1596]: time="2026-01-16T21:22:19.172652464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:22:19.183000 audit[4871]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4871 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:19.183000 audit[4871]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc6c1da9c0 a2=0 a3=7ffc6c1da9ac items=0 ppid=2988 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:19.202378 containerd[1596]: time="2026-01-16T21:22:19.202277948Z" level=info msg="CreateContainer within sandbox \"0f75ea2e432429f6579478db4fc6bdd29ee69fcd618841c136bdb7a9517c5dd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"473deb54847b9c488f13402235100d3dfb9c74f4f5681a70276682da3da5ffc7\"" Jan 16 21:22:19.210176 containerd[1596]: time="2026-01-16T21:22:19.205471807Z" level=info msg="StartContainer for \"473deb54847b9c488f13402235100d3dfb9c74f4f5681a70276682da3da5ffc7\"" Jan 16 21:22:19.210176 containerd[1596]: time="2026-01-16T21:22:19.206969957Z" level=info msg="connecting to shim 473deb54847b9c488f13402235100d3dfb9c74f4f5681a70276682da3da5ffc7" address="unix:///run/containerd/s/2b2c6d3895af517caa6d56c978fa8da3c7200a9f1fa860a75ebafb48f7e29d44" protocol=ttrpc version=3 Jan 16 21:22:19.217000 audit[4871]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4871 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:19.217000 audit[4871]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc6c1da9c0 a2=0 a3=0 
items=0 ppid=2988 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:19.226935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48814850.mount: Deactivated successfully. Jan 16 21:22:19.302609 containerd[1596]: time="2026-01-16T21:22:19.301662170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:19.309269 containerd[1596]: time="2026-01-16T21:22:19.309222502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:22:19.310978 containerd[1596]: time="2026-01-16T21:22:19.309647312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:19.333322 kubelet[2829]: E0116 21:22:19.331327 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:19.333322 kubelet[2829]: E0116 21:22:19.331388 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:19.349505 kubelet[2829]: E0116 21:22:19.331515 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:19.350888 kubelet[2829]: E0116 21:22:19.350786 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:22:19.412845 systemd[1]: Started cri-containerd-473deb54847b9c488f13402235100d3dfb9c74f4f5681a70276682da3da5ffc7.scope - libcontainer container 473deb54847b9c488f13402235100d3dfb9c74f4f5681a70276682da3da5ffc7. 
Jan 16 21:22:19.507034 containerd[1596]: time="2026-01-16T21:22:19.506765164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bbd7c669-5jcq4,Uid:1ffcbae4-3231-47a7-b3a3-9a78e5206e0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4df2b8f25d8dbf698979a81ee8be1e4465e907ddd45dfc14f3f15d6ee17aa239\"" Jan 16 21:22:19.532307 containerd[1596]: time="2026-01-16T21:22:19.527689595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:22:19.596000 audit: BPF prog-id=211 op=LOAD Jan 16 21:22:19.599000 audit: BPF prog-id=212 op=LOAD Jan 16 21:22:19.599000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4742 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336465623534383437623963343838663133343032323335313030 Jan 16 21:22:19.599000 audit: BPF prog-id=212 op=UNLOAD Jan 16 21:22:19.599000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336465623534383437623963343838663133343032323335313030 Jan 16 21:22:19.599000 audit: BPF prog-id=213 op=LOAD Jan 16 21:22:19.599000 audit[4872]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4742 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336465623534383437623963343838663133343032323335313030 Jan 16 21:22:19.600000 audit: BPF prog-id=214 op=LOAD Jan 16 21:22:19.600000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4742 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336465623534383437623963343838663133343032323335313030 Jan 16 21:22:19.600000 audit: BPF prog-id=214 op=UNLOAD Jan 16 21:22:19.600000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336465623534383437623963343838663133343032323335313030 Jan 16 21:22:19.600000 audit: BPF prog-id=213 op=UNLOAD 
Jan 16 21:22:19.600000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336465623534383437623963343838663133343032323335313030 Jan 16 21:22:19.601000 audit: BPF prog-id=215 op=LOAD Jan 16 21:22:19.601000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4742 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437336465623534383437623963343838663133343032323335313030 Jan 16 21:22:19.610946 systemd-networkd[1513]: calia822ec26e90: Gained IPv6LL Jan 16 21:22:19.645000 audit: BPF prog-id=216 op=LOAD Jan 16 21:22:19.645000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd147b3430 a2=94 a3=1 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.645000 audit: BPF prog-id=216 op=UNLOAD Jan 16 21:22:19.645000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=4 a1=7ffd147b3430 a2=94 a3=1 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.654385 containerd[1596]: time="2026-01-16T21:22:19.653295141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:19.676242 containerd[1596]: time="2026-01-16T21:22:19.676005818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:22:19.676242 containerd[1596]: time="2026-01-16T21:22:19.676213895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:19.677362 kubelet[2829]: E0116 21:22:19.677207 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:19.677362 kubelet[2829]: E0116 21:22:19.677305 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:19.682841 kubelet[2829]: E0116 21:22:19.680259 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4a36d8bfc8d44428963d068adc3adb01,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:19.692000 audit: BPF prog-id=217 op=LOAD Jan 16 21:22:19.692000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd147b3420 a2=94 a3=4 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.693000 audit: BPF prog-id=217 op=UNLOAD Jan 16 21:22:19.693000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd147b3420 a2=0 a3=4 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.693000 audit: BPF prog-id=218 op=LOAD Jan 16 21:22:19.693000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd147b3280 a2=94 a3=5 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.693000 audit: BPF prog-id=218 op=UNLOAD Jan 16 21:22:19.693000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd147b3280 a2=0 a3=5 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.693000 audit: BPF prog-id=219 op=LOAD Jan 16 21:22:19.693000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd147b34a0 a2=94 a3=6 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.693000 audit: BPF prog-id=219 op=UNLOAD Jan 16 21:22:19.693000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd147b34a0 a2=0 a3=6 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.694000 audit: BPF prog-id=220 op=LOAD Jan 16 21:22:19.694000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd147b2c50 a2=94 a3=88 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.694000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.696000 audit: BPF prog-id=221 op=LOAD Jan 16 21:22:19.696000 audit[4796]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd147b2ad0 a2=94 a3=2 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.696000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.696000 audit: BPF prog-id=221 op=UNLOAD Jan 16 21:22:19.696000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd147b2b00 a2=0 a3=7ffd147b2c00 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.696000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.697000 audit: BPF prog-id=220 op=UNLOAD Jan 16 21:22:19.697000 audit[4796]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=6a8d10 a2=0 a3=1a685e8e11d5a9e0 items=0 ppid=4441 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.697000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 21:22:19.700784 containerd[1596]: time="2026-01-16T21:22:19.697962730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:22:19.737505 kubelet[2829]: E0116 21:22:19.737447 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:22:19.761418 containerd[1596]: time="2026-01-16T21:22:19.758063922Z" level=info msg="StartContainer for \"473deb54847b9c488f13402235100d3dfb9c74f4f5681a70276682da3da5ffc7\" returns successfully" Jan 16 21:22:19.771891 kubelet[2829]: E0116 
21:22:19.771845 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:22:19.800662 containerd[1596]: time="2026-01-16T21:22:19.800306267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:19.807929 containerd[1596]: time="2026-01-16T21:22:19.807635944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:22:19.807929 containerd[1596]: time="2026-01-16T21:22:19.807779192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:19.808217 kubelet[2829]: E0116 21:22:19.807947 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:22:19.808217 kubelet[2829]: E0116 21:22:19.808004 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 
21:22:19.808317 kubelet[2829]: E0116 21:22:19.808269 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:19.809962 kubelet[2829]: E0116 21:22:19.809916 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:22:19.831000 audit: BPF prog-id=222 op=LOAD Jan 16 21:22:19.831000 audit[4909]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9788ff20 a2=98 a3=1999999999999999 items=0 ppid=4441 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:22:19.831000 audit: BPF prog-id=222 op=UNLOAD Jan 16 21:22:19.831000 audit[4909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd9788fef0 a3=0 items=0 ppid=4441 pid=4909 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:22:19.831000 audit: BPF prog-id=223 op=LOAD Jan 16 21:22:19.831000 audit[4909]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9788fe00 a2=94 a3=ffff items=0 ppid=4441 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:22:19.831000 audit: BPF prog-id=223 op=UNLOAD Jan 16 21:22:19.831000 audit[4909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd9788fe00 a2=94 a3=ffff items=0 ppid=4441 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:22:19.831000 audit: BPF prog-id=224 op=LOAD Jan 16 21:22:19.831000 audit[4909]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffd9788fe40 a2=94 a3=7ffd97890020 items=0 ppid=4441 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:22:19.831000 audit: BPF prog-id=224 op=UNLOAD Jan 16 21:22:19.831000 audit[4909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd9788fe40 a2=94 a3=7ffd97890020 items=0 ppid=4441 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:19.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 21:22:20.381373 systemd-networkd[1513]: vxlan.calico: Link UP Jan 16 21:22:20.381388 systemd-networkd[1513]: vxlan.calico: Gained carrier Jan 16 21:22:20.396000 audit[4932]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:20.396000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc111bde0 a2=0 a3=7ffcc111bdcc items=0 ppid=2988 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.396000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:20.409000 audit[4932]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:20.409000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc111bde0 a2=0 a3=0 items=0 ppid=2988 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:20.547000 audit: BPF prog-id=225 op=LOAD Jan 16 21:22:20.547000 audit[4948]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcccf6bb0 a2=98 a3=0 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.547000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.547000 audit: BPF prog-id=225 op=UNLOAD Jan 16 21:22:20.547000 audit[4948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffcccf6b80 a3=0 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.547000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=226 op=LOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcccf69c0 a2=94 a3=54428f items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=226 op=UNLOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffcccf69c0 a2=94 a3=54428f items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=227 op=LOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcccf69f0 a2=94 a3=2 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=227 op=UNLOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffcccf69f0 a2=0 a3=2 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=228 op=LOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffcccf67a0 a2=94 a3=4 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=228 op=UNLOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffcccf67a0 a2=94 a3=4 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=229 op=LOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffcccf68a0 a2=94 a3=7fffcccf6a20 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.549000 audit: BPF prog-id=229 op=UNLOAD Jan 16 21:22:20.549000 audit[4948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffcccf68a0 a2=0 a3=7fffcccf6a20 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.549000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.551000 audit: BPF prog-id=230 op=LOAD Jan 16 21:22:20.551000 audit[4948]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffcccf5fd0 a2=94 a3=2 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.551000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.551000 audit: BPF prog-id=230 op=UNLOAD Jan 16 21:22:20.551000 audit[4948]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffcccf5fd0 a2=0 a3=2 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.551000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.551000 audit: BPF prog-id=231 op=LOAD Jan 16 21:22:20.551000 audit[4948]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffcccf60d0 a2=94 a3=30 items=0 ppid=4441 pid=4948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.551000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 21:22:20.591000 audit: BPF prog-id=232 op=LOAD Jan 16 21:22:20.591000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe543f8a50 a2=98 a3=0 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.591000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:20.591000 audit: BPF prog-id=232 op=UNLOAD Jan 16 21:22:20.591000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe543f8a20 a3=0 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.591000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:20.592000 audit: BPF prog-id=233 op=LOAD Jan 16 21:22:20.592000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe543f8840 a2=94 a3=54428f items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:20.592000 audit: BPF prog-id=233 op=UNLOAD Jan 16 21:22:20.592000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe543f8840 a2=94 a3=54428f items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.592000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:20.592000 audit: BPF prog-id=234 op=LOAD Jan 16 21:22:20.592000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe543f8870 a2=94 a3=2 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:20.592000 audit: BPF prog-id=234 op=UNLOAD Jan 16 21:22:20.592000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe543f8870 a2=0 a3=2 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:20.592000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:20.788226 kubelet[2829]: E0116 21:22:20.785714 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:20.797822 kubelet[2829]: E0116 21:22:20.791729 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:22:20.797822 kubelet[2829]: E0116 21:22:20.792841 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:22:20.805974 kubelet[2829]: E0116 21:22:20.801060 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:22:21.033000 audit[4962]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4962 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:21.033000 audit[4962]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa0333000 a2=0 a3=7fffa0332fec items=0 ppid=2988 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:21.050000 audit[4962]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4962 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:21.050000 audit[4962]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffa0333000 a2=0 a3=0 items=0 ppid=2988 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.050000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:21.085433 kubelet[2829]: I0116 21:22:21.085328 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6vb67" podStartSLOduration=68.085300436 podStartE2EDuration="1m8.085300436s" podCreationTimestamp="2026-01-16 21:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:22:20.965734849 +0000 UTC 
m=+71.842765856" watchObservedRunningTime="2026-01-16 21:22:21.085300436 +0000 UTC m=+71.962331473" Jan 16 21:22:21.208000 audit: BPF prog-id=235 op=LOAD Jan 16 21:22:21.208000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe543f8730 a2=94 a3=1 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.208000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.210000 audit: BPF prog-id=235 op=UNLOAD Jan 16 21:22:21.210000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe543f8730 a2=94 a3=1 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.226000 audit: BPF prog-id=236 op=LOAD Jan 16 21:22:21.226000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe543f8720 a2=94 a3=4 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.226000 audit: BPF prog-id=236 op=UNLOAD Jan 
16 21:22:21.226000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe543f8720 a2=0 a3=4 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.227000 audit: BPF prog-id=237 op=LOAD Jan 16 21:22:21.227000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe543f8580 a2=94 a3=5 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.227000 audit: BPF prog-id=237 op=UNLOAD Jan 16 21:22:21.227000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe543f8580 a2=0 a3=5 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.227000 audit: BPF prog-id=238 op=LOAD Jan 16 21:22:21.227000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe543f87a0 a2=94 a3=6 items=0 ppid=4441 pid=4958 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.227000 audit: BPF prog-id=238 op=UNLOAD Jan 16 21:22:21.227000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe543f87a0 a2=0 a3=6 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.228000 audit: BPF prog-id=239 op=LOAD Jan 16 21:22:21.228000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe543f7f50 a2=94 a3=88 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.228000 audit: BPF prog-id=240 op=LOAD Jan 16 21:22:21.228000 audit[4958]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe543f7dd0 a2=94 a3=2 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 21:22:21.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.228000 audit: BPF prog-id=240 op=UNLOAD Jan 16 21:22:21.228000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe543f7e00 a2=0 a3=7ffe543f7f00 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.229000 audit: BPF prog-id=239 op=UNLOAD Jan 16 21:22:21.229000 audit[4958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3ee18d10 a2=0 a3=1a06bfca5fd7a595 items=0 ppid=4441 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 21:22:21.260000 audit: BPF prog-id=231 op=UNLOAD Jan 16 21:22:21.260000 audit[4441]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0011782c0 a2=0 a3=0 items=0 ppid=4431 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.260000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 16 
21:22:21.738000 audit[4989]: NETFILTER_CFG table=mangle:131 family=2 entries=16 op=nft_register_chain pid=4989 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:22:21.738000 audit[4989]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff76d08250 a2=0 a3=7fff76d0823c items=0 ppid=4441 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.738000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:22:21.748000 audit[4988]: NETFILTER_CFG table=nat:132 family=2 entries=15 op=nft_register_chain pid=4988 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:22:21.748000 audit[4988]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe23e30810 a2=0 a3=7ffe23e307fc items=0 ppid=4441 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.748000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:22:21.800995 kubelet[2829]: E0116 21:22:21.800728 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:21.830000 audit[4992]: NETFILTER_CFG table=raw:133 family=2 entries=21 op=nft_register_chain pid=4992 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:22:21.830000 audit[4992]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff830822d0 a2=0 a3=7fff830822bc 
items=0 ppid=4441 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.830000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:22:21.835000 audit[4990]: NETFILTER_CFG table=filter:134 family=2 entries=269 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:22:21.835000 audit[4990]: SYSCALL arch=c000003e syscall=46 success=yes exit=158872 a0=3 a1=7fffcfcbde10 a2=0 a3=5568f10d0000 items=0 ppid=4441 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:21.835000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:22:22.254000 audit[5002]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:22.254000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff60be6a00 a2=0 a3=7fff60be69ec items=0 ppid=2988 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:22.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:22.287000 audit[5002]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5002 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 16 21:22:22.287000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff60be6a00 a2=0 a3=7fff60be69ec items=0 ppid=2988 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:22.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:22.426874 systemd-networkd[1513]: vxlan.calico: Gained IPv6LL Jan 16 21:22:22.801491 kubelet[2829]: E0116 21:22:22.800721 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:24.446782 kubelet[2829]: E0116 21:22:24.445862 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:24.449631 containerd[1596]: time="2026-01-16T21:22:24.449507977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,}" Jan 16 21:22:25.196267 systemd-networkd[1513]: cali20ec1223276: Link UP Jan 16 21:22:25.200880 systemd-networkd[1513]: cali20ec1223276: Gained carrier Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.744 [INFO][5003] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0 coredns-668d6bf9bc- kube-system 35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890 909 0 2026-01-16 21:21:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} 
{k8s localhost coredns-668d6bf9bc-tzvp2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20ec1223276 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.744 [INFO][5003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.910 [INFO][5017] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" HandleID="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Workload="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.910 [INFO][5017] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" HandleID="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Workload="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1b00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-tzvp2", "timestamp":"2026-01-16 21:22:24.909995068 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.911 [INFO][5017] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.911 [INFO][5017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.912 [INFO][5017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:24.955 [INFO][5017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.009 [INFO][5017] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.053 [INFO][5017] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.074 [INFO][5017] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.087 [INFO][5017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.087 [INFO][5017] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.096 [INFO][5017] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9 Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.122 [INFO][5017] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 
2026-01-16 21:22:25.156 [INFO][5017] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.156 [INFO][5017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" host="localhost" Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.156 [INFO][5017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:22:25.264258 containerd[1596]: 2026-01-16 21:22:25.157 [INFO][5017] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" HandleID="k8s-pod-network.95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Workload="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" Jan 16 21:22:25.265493 containerd[1596]: 2026-01-16 21:22:25.168 [INFO][5003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-tzvp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20ec1223276", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:25.265493 containerd[1596]: 2026-01-16 21:22:25.168 [INFO][5003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" Jan 16 21:22:25.265493 containerd[1596]: 2026-01-16 21:22:25.168 [INFO][5003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20ec1223276 ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" Jan 16 21:22:25.265493 containerd[1596]: 2026-01-16 21:22:25.202 [INFO][5003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" Jan 16 21:22:25.265493 containerd[1596]: 2026-01-16 21:22:25.206 [INFO][5003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9", Pod:"coredns-668d6bf9bc-tzvp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20ec1223276", MAC:"c6:c5:8b:d4:83:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:25.265493 containerd[1596]: 2026-01-16 21:22:25.236 [INFO][5003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-tzvp2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tzvp2-eth0" Jan 16 21:22:25.412000 audit[5040]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:22:25.432429 kernel: kauditd_printk_skb: 328 callbacks suppressed Jan 16 21:22:25.432638 kernel: audit: type=1325 audit(1768598545.412:708): table=filter:137 family=2 entries=48 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:22:25.432687 containerd[1596]: time="2026-01-16T21:22:25.425392311Z" level=info msg="connecting to shim 95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9" address="unix:///run/containerd/s/025bd000e06173dac56984ec4ab0c44ae572d76f0df60a9775885c10bc9c6d7b" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:25.412000 audit[5040]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffea0ac7940 a2=0 a3=7ffea0ac792c items=0 ppid=4441 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.476254 kernel: audit: type=1300 audit(1768598545.412:708): arch=c000003e syscall=46 
success=yes exit=22704 a0=3 a1=7ffea0ac7940 a2=0 a3=7ffea0ac792c items=0 ppid=4441 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.476394 containerd[1596]: time="2026-01-16T21:22:25.458888832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,}" Jan 16 21:22:25.412000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:22:25.521232 kernel: audit: type=1327 audit(1768598545.412:708): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 21:22:25.638738 systemd[1]: Started cri-containerd-95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9.scope - libcontainer container 95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9. 
Jan 16 21:22:25.682000 audit: BPF prog-id=241 op=LOAD Jan 16 21:22:25.691237 kernel: audit: type=1334 audit(1768598545.682:709): prog-id=241 op=LOAD Jan 16 21:22:25.691000 audit: BPF prog-id=242 op=LOAD Jan 16 21:22:25.702376 kernel: audit: type=1334 audit(1768598545.691:710): prog-id=242 op=LOAD Jan 16 21:22:25.702463 kernel: audit: type=1300 audit(1768598545.691:710): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000170238 a2=98 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.691000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000170238 a2=98 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.702501 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:25.758456 kernel: audit: type=1327 audit(1768598545.691:710): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.692000 audit: BPF prog-id=242 op=UNLOAD Jan 16 21:22:25.776383 kernel: audit: type=1334 audit(1768598545.692:711): prog-id=242 op=UNLOAD Jan 16 21:22:25.776441 kernel: audit: type=1300 audit(1768598545.692:711): 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.692000 audit[5059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.853409 kernel: audit: type=1327 audit(1768598545.692:711): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.692000 audit: BPF prog-id=243 op=LOAD Jan 16 21:22:25.692000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000170488 a2=98 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.692000 audit: BPF prog-id=244 op=LOAD Jan 16 21:22:25.692000 
audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000170218 a2=98 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.692000 audit: BPF prog-id=244 op=UNLOAD Jan 16 21:22:25.692000 audit[5059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.692000 audit: BPF prog-id=243 op=UNLOAD Jan 16 21:22:25.692000 audit[5059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.692000 audit: BPF 
prog-id=245 op=LOAD Jan 16 21:22:25.692000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001706e8 a2=98 a3=0 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:25.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935393838613666666332613163383864383837316130366332346138 Jan 16 21:22:25.940991 containerd[1596]: time="2026-01-16T21:22:25.940387751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tzvp2,Uid:35e6cf4c-2c1d-4d9f-ace9-c3378ebf9890,Namespace:kube-system,Attempt:0,} returns sandbox id \"95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9\"" Jan 16 21:22:25.946856 kubelet[2829]: E0116 21:22:25.946792 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:26.014457 containerd[1596]: time="2026-01-16T21:22:26.013956792Z" level=info msg="CreateContainer within sandbox \"95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 21:22:26.159021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250571006.mount: Deactivated successfully. 
Jan 16 21:22:26.174429 containerd[1596]: time="2026-01-16T21:22:26.172063587Z" level=info msg="Container beebe5fe665be41a68f7e81f11633a688d91ddec4d6760bdccd3d6bf8cf498ad: CDI devices from CRI Config.CDIDevices: []" Jan 16 21:22:26.245838 containerd[1596]: time="2026-01-16T21:22:26.245060002Z" level=info msg="CreateContainer within sandbox \"95988a6ffc2a1c88d8871a06c24a8f86d67e9cc0bd5ab52bf6a70e6641bb87e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"beebe5fe665be41a68f7e81f11633a688d91ddec4d6760bdccd3d6bf8cf498ad\"" Jan 16 21:22:26.251777 containerd[1596]: time="2026-01-16T21:22:26.250271677Z" level=info msg="StartContainer for \"beebe5fe665be41a68f7e81f11633a688d91ddec4d6760bdccd3d6bf8cf498ad\"" Jan 16 21:22:26.276886 containerd[1596]: time="2026-01-16T21:22:26.254290785Z" level=info msg="connecting to shim beebe5fe665be41a68f7e81f11633a688d91ddec4d6760bdccd3d6bf8cf498ad" address="unix:///run/containerd/s/025bd000e06173dac56984ec4ab0c44ae572d76f0df60a9775885c10bc9c6d7b" protocol=ttrpc version=3 Jan 16 21:22:26.401363 systemd[1]: Started cri-containerd-beebe5fe665be41a68f7e81f11633a688d91ddec4d6760bdccd3d6bf8cf498ad.scope - libcontainer container beebe5fe665be41a68f7e81f11633a688d91ddec4d6760bdccd3d6bf8cf498ad. 
Jan 16 21:22:26.495000 audit: BPF prog-id=246 op=LOAD Jan 16 21:22:26.497000 audit: BPF prog-id=247 op=LOAD Jan 16 21:22:26.497000 audit[5110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5048 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:26.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656265356665363635626534316136386637653831663131363333 Jan 16 21:22:26.497000 audit: BPF prog-id=247 op=UNLOAD Jan 16 21:22:26.497000 audit[5110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:26.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656265356665363635626534316136386637653831663131363333 Jan 16 21:22:26.497000 audit: BPF prog-id=248 op=LOAD Jan 16 21:22:26.497000 audit[5110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5048 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:26.497000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656265356665363635626534316136386637653831663131363333 Jan 16 21:22:26.498000 audit: BPF prog-id=249 op=LOAD Jan 16 21:22:26.498000 audit[5110]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5048 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:26.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656265356665363635626534316136386637653831663131363333 Jan 16 21:22:26.498000 audit: BPF prog-id=249 op=UNLOAD Jan 16 21:22:26.498000 audit[5110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:26.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656265356665363635626534316136386637653831663131363333 Jan 16 21:22:26.498000 audit: BPF prog-id=248 op=UNLOAD Jan 16 21:22:26.498000 audit[5110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
21:22:26.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656265356665363635626534316136386637653831663131363333 Jan 16 21:22:26.498000 audit: BPF prog-id=250 op=LOAD Jan 16 21:22:26.498000 audit[5110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5048 pid=5110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:26.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656265356665363635626534316136386637653831663131363333 Jan 16 21:22:26.552446 systemd-networkd[1513]: cali2cba08d0f4a: Link UP Jan 16 21:22:26.555235 systemd-networkd[1513]: cali2cba08d0f4a: Gained carrier Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:25.789 [INFO][5061] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0 calico-kube-controllers-66dd98b47c- calico-system fe95499a-0c2a-421c-aaa9-9ead2566d247 908 0 2026-01-16 21:21:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66dd98b47c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66dd98b47c-2sbfh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2cba08d0f4a [] [] }} 
ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:25.791 [INFO][5061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.069 [INFO][5103] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" HandleID="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Workload="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.074 [INFO][5103] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" HandleID="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Workload="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c6160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66dd98b47c-2sbfh", "timestamp":"2026-01-16 21:22:26.06931289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.074 [INFO][5103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.093 [INFO][5103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.093 [INFO][5103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.129 [INFO][5103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.197 [INFO][5103] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.244 [INFO][5103] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.287 [INFO][5103] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.330 [INFO][5103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.339 [INFO][5103] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.380 [INFO][5103] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4 Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.438 [INFO][5103] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.522 [INFO][5103] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.525 [INFO][5103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" host="localhost" Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.525 [INFO][5103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 21:22:26.690359 containerd[1596]: 2026-01-16 21:22:26.525 [INFO][5103] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" HandleID="k8s-pod-network.971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Workload="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" Jan 16 21:22:26.697759 containerd[1596]: 2026-01-16 21:22:26.542 [INFO][5061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0", GenerateName:"calico-kube-controllers-66dd98b47c-", Namespace:"calico-system", SelfLink:"", UID:"fe95499a-0c2a-421c-aaa9-9ead2566d247", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"66dd98b47c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66dd98b47c-2sbfh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cba08d0f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:26.697759 containerd[1596]: 2026-01-16 21:22:26.542 [INFO][5061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" Jan 16 21:22:26.697759 containerd[1596]: 2026-01-16 21:22:26.542 [INFO][5061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cba08d0f4a ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" Jan 16 21:22:26.697759 containerd[1596]: 2026-01-16 21:22:26.563 [INFO][5061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" Jan 16 21:22:26.697759 containerd[1596]: 2026-01-16 21:22:26.581 [INFO][5061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0", GenerateName:"calico-kube-controllers-66dd98b47c-", Namespace:"calico-system", SelfLink:"", UID:"fe95499a-0c2a-421c-aaa9-9ead2566d247", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 21, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66dd98b47c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4", Pod:"calico-kube-controllers-66dd98b47c-2sbfh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cba08d0f4a", MAC:"96:7c:89:bf:a4:32", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 21:22:26.697759 containerd[1596]: 2026-01-16 21:22:26.642 [INFO][5061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" Namespace="calico-system" Pod="calico-kube-controllers-66dd98b47c-2sbfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66dd98b47c--2sbfh-eth0" Jan 16 21:22:26.712353 containerd[1596]: time="2026-01-16T21:22:26.712311683Z" level=info msg="StartContainer for \"beebe5fe665be41a68f7e81f11633a688d91ddec4d6760bdccd3d6bf8cf498ad\" returns successfully" Jan 16 21:22:26.872643 containerd[1596]: time="2026-01-16T21:22:26.869409393Z" level=info msg="connecting to shim 971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4" address="unix:///run/containerd/s/51337dec74440827c2a7b4e66847f03f5812f12fd130fe7b9f89a22cb50eddde" namespace=k8s.io protocol=ttrpc version=3 Jan 16 21:22:26.872752 kubelet[2829]: E0116 21:22:26.870700 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:26.916000 audit[5172]: NETFILTER_CFG table=filter:138 family=2 entries=62 op=nft_register_chain pid=5172 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 21:22:26.916000 audit[5172]: SYSCALL arch=c000003e syscall=46 success=yes exit=28352 a0=3 a1=7ffef22e1860 a2=0 a3=7ffef22e184c items=0 ppid=4441 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:26.916000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 
21:22:26.959742 kubelet[2829]: I0116 21:22:26.957061 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tzvp2" podStartSLOduration=73.957039661 podStartE2EDuration="1m13.957039661s" podCreationTimestamp="2026-01-16 21:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 21:22:26.95447436 +0000 UTC m=+77.831505367" watchObservedRunningTime="2026-01-16 21:22:26.957039661 +0000 UTC m=+77.834070667" Jan 16 21:22:27.017466 systemd[1]: Started cri-containerd-971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4.scope - libcontainer container 971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4. Jan 16 21:22:27.034576 systemd-networkd[1513]: cali20ec1223276: Gained IPv6LL Jan 16 21:22:27.046000 audit[5194]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:27.046000 audit[5194]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd1c3939a0 a2=0 a3=7ffd1c39398c items=0 ppid=2988 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:27.054000 audit[5194]: NETFILTER_CFG table=nat:140 family=2 entries=44 op=nft_register_rule pid=5194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:27.054000 audit[5194]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd1c3939a0 a2=0 a3=7ffd1c39398c items=0 ppid=2988 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:27.078000 audit: BPF prog-id=251 op=LOAD Jan 16 21:22:27.080000 audit: BPF prog-id=252 op=LOAD Jan 16 21:22:27.080000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5165 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316461306635373866326566626435333835396266613938373336 Jan 16 21:22:27.080000 audit: BPF prog-id=252 op=UNLOAD Jan 16 21:22:27.080000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5165 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316461306635373866326566626435333835396266613938373336 Jan 16 21:22:27.080000 audit: BPF prog-id=253 op=LOAD Jan 16 21:22:27.080000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5165 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316461306635373866326566626435333835396266613938373336 Jan 16 21:22:27.080000 audit: BPF prog-id=254 op=LOAD Jan 16 21:22:27.080000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5165 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316461306635373866326566626435333835396266613938373336 Jan 16 21:22:27.080000 audit: BPF prog-id=254 op=UNLOAD Jan 16 21:22:27.080000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5165 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316461306635373866326566626435333835396266613938373336 Jan 16 21:22:27.080000 audit: BPF prog-id=253 op=UNLOAD Jan 16 21:22:27.080000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5165 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316461306635373866326566626435333835396266613938373336 Jan 16 21:22:27.080000 audit: BPF prog-id=255 op=LOAD Jan 16 21:22:27.080000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5165 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:27.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937316461306635373866326566626435333835396266613938373336 Jan 16 21:22:27.106182 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 16 21:22:27.246385 containerd[1596]: time="2026-01-16T21:22:27.246051473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66dd98b47c-2sbfh,Uid:fe95499a-0c2a-421c-aaa9-9ead2566d247,Namespace:calico-system,Attempt:0,} returns sandbox id \"971da0f578f2efbd53859bfa98736ba62841ba53079cf61cea0f7ab1f11dafe4\"" Jan 16 21:22:27.253672 containerd[1596]: time="2026-01-16T21:22:27.253066722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:22:27.391354 containerd[1596]: time="2026-01-16T21:22:27.389824582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:27.396347 containerd[1596]: time="2026-01-16T21:22:27.395269604Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:22:27.396347 containerd[1596]: time="2026-01-16T21:22:27.395369470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:27.396496 kubelet[2829]: E0116 21:22:27.395903 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:27.396496 kubelet[2829]: E0116 21:22:27.395954 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:27.397744 kubelet[2829]: E0116 21:22:27.397609 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw7zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:27.400261 kubelet[2829]: E0116 21:22:27.400047 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:22:27.806861 systemd-networkd[1513]: cali2cba08d0f4a: Gained IPv6LL Jan 16 21:22:27.893192 kubelet[2829]: E0116 21:22:27.893054 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 16 21:22:27.897387 kubelet[2829]: E0116 21:22:27.897318 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:22:28.020000 audit[5209]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:28.020000 audit[5209]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe693a110 a2=0 a3=7fffe693a0fc items=0 ppid=2988 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:28.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:28.091000 audit[5209]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:22:28.091000 audit[5209]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffe693a110 a2=0 a3=7fffe693a0fc items=0 ppid=2988 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:28.091000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:22:28.444450 kubelet[2829]: E0116 21:22:28.443511 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:28.900319 kubelet[2829]: E0116 21:22:28.900271 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:22:28.902176 kubelet[2829]: E0116 21:22:28.901454 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:29.455171 containerd[1596]: time="2026-01-16T21:22:29.453008880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:29.572785 containerd[1596]: time="2026-01-16T21:22:29.572590923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:29.579839 containerd[1596]: time="2026-01-16T21:22:29.577011328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:29.579839 containerd[1596]: time="2026-01-16T21:22:29.577195912Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:29.580037 kubelet[2829]: E0116 21:22:29.577364 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:29.580037 kubelet[2829]: E0116 21:22:29.577423 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:29.580037 kubelet[2829]: E0116 21:22:29.577647 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77ptz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:29.580037 kubelet[2829]: E0116 21:22:29.579617 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:22:29.904721 kubelet[2829]: E0116 21:22:29.904594 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:30.444230 kubelet[2829]: E0116 21:22:30.444009 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:31.445953 kubelet[2829]: E0116 21:22:31.445517 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:31.450239 containerd[1596]: time="2026-01-16T21:22:31.450012240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:31.554305 containerd[1596]: time="2026-01-16T21:22:31.554209925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:31.561996 containerd[1596]: time="2026-01-16T21:22:31.559755870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:31.561996 containerd[1596]: time="2026-01-16T21:22:31.560031654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:31.562374 kubelet[2829]: E0116 21:22:31.561466 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:31.562374 kubelet[2829]: E0116 21:22:31.561522 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:31.562374 kubelet[2829]: E0116 21:22:31.561924 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czw4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:31.564993 kubelet[2829]: E0116 21:22:31.564940 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:22:35.482171 containerd[1596]: time="2026-01-16T21:22:35.481795908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:22:35.596508 containerd[1596]: time="2026-01-16T21:22:35.596270040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:35.600367 containerd[1596]: time="2026-01-16T21:22:35.600304409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:35.600491 containerd[1596]: time="2026-01-16T21:22:35.600315575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:22:35.605164 kubelet[2829]: E0116 21:22:35.601226 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:22:35.605164 kubelet[2829]: E0116 21:22:35.601317 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:22:35.605164 kubelet[2829]: E0116 21:22:35.602021 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zkb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:35.605927 containerd[1596]: time="2026-01-16T21:22:35.603736369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:22:35.611158 kubelet[2829]: E0116 21:22:35.609798 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" 
podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:22:35.707148 containerd[1596]: time="2026-01-16T21:22:35.706635098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:35.717762 containerd[1596]: time="2026-01-16T21:22:35.717632212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:22:35.718016 containerd[1596]: time="2026-01-16T21:22:35.717770590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:35.718148 kubelet[2829]: E0116 21:22:35.717962 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:35.718148 kubelet[2829]: E0116 21:22:35.718014 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:35.718293 kubelet[2829]: E0116 21:22:35.718257 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4a36d8bfc8d44428963d068adc3adb01,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:35.719116 containerd[1596]: time="2026-01-16T21:22:35.718862355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:22:35.786868 containerd[1596]: time="2026-01-16T21:22:35.786736921Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:35.795932 containerd[1596]: time="2026-01-16T21:22:35.795862198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:35.795932 containerd[1596]: time="2026-01-16T21:22:35.795938180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:22:35.798153 kubelet[2829]: E0116 21:22:35.797533 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:35.798153 kubelet[2829]: E0116 21:22:35.797648 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:22:35.798153 kubelet[2829]: E0116 21:22:35.797906 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:22:35.799064 containerd[1596]: time="2026-01-16T21:22:35.798831916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:22:35.876467 containerd[1596]: time="2026-01-16T21:22:35.876362864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:35.880586 containerd[1596]: time="2026-01-16T21:22:35.880333043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:35.880586 containerd[1596]: time="2026-01-16T21:22:35.880385862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:22:35.884417 kubelet[2829]: E0116 21:22:35.882047 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:22:35.884417 kubelet[2829]: E0116 21:22:35.882180 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:22:35.884417 kubelet[2829]: E0116 21:22:35.882708 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:35.884417 kubelet[2829]: E0116 21:22:35.883842 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:22:35.884982 containerd[1596]: time="2026-01-16T21:22:35.884025464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:22:35.964397 containerd[1596]: time="2026-01-16T21:22:35.963907325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:35.971172 containerd[1596]: time="2026-01-16T21:22:35.970963314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:22:35.971172 containerd[1596]: time="2026-01-16T21:22:35.971150253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:35.975642 kubelet[2829]: E0116 21:22:35.975413 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:35.975642 kubelet[2829]: E0116 21:22:35.975503 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:22:35.975807 kubelet[2829]: E0116 21:22:35.975680 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:35.978004 kubelet[2829]: E0116 21:22:35.976794 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:22:37.362938 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:39062.service - OpenSSH per-connection server daemon (10.0.0.1:39062). 
Jan 16 21:22:37.383811 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 16 21:22:37.383896 kernel: audit: type=1130 audit(1768598557.363:738): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:39062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:37.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:39062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:37.587000 audit[5226]: USER_ACCT pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.589717 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 39062 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:22:37.595608 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:37.588000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.617244 systemd-logind[1575]: New session 9 of user core. 
Jan 16 21:22:37.623590 kernel: audit: type=1101 audit(1768598557.587:739): pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.623662 kernel: audit: type=1103 audit(1768598557.588:740): pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.623700 kernel: audit: type=1006 audit(1768598557.588:741): pid=5226 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 16 21:22:37.633465 kernel: audit: type=1300 audit(1768598557.588:741): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6564ade0 a2=3 a3=0 items=0 ppid=1 pid=5226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:37.588000 audit[5226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6564ade0 a2=3 a3=0 items=0 ppid=1 pid=5226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:37.635622 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 16 21:22:37.588000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:37.658919 kernel: audit: type=1327 audit(1768598557.588:741): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:37.641000 audit[5226]: USER_START pid=5226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.677230 kernel: audit: type=1105 audit(1768598557.641:742): pid=5226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.646000 audit[5230]: CRED_ACQ pid=5230 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.690418 kernel: audit: type=1103 audit(1768598557.646:743): pid=5230 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.838647 sshd[5230]: Connection closed by 10.0.0.1 port 39062 Jan 16 21:22:37.839071 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:37.844000 audit[5226]: USER_END pid=5226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.845000 audit[5226]: CRED_DISP pid=5226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.866050 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:39062.service: Deactivated successfully. Jan 16 21:22:37.869722 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 21:22:37.872803 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Jan 16 21:22:37.875760 systemd-logind[1575]: Removed session 9. Jan 16 21:22:37.877991 kernel: audit: type=1106 audit(1768598557.844:744): pid=5226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.878069 kernel: audit: type=1104 audit(1768598557.845:745): pid=5226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:37.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.59:22-10.0.0.1:39062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:38.442970 kubelet[2829]: E0116 21:22:38.442581 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:41.450927 containerd[1596]: time="2026-01-16T21:22:41.450222889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:22:41.590882 containerd[1596]: time="2026-01-16T21:22:41.589873180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:41.597725 containerd[1596]: time="2026-01-16T21:22:41.596829564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:22:41.597725 containerd[1596]: time="2026-01-16T21:22:41.596956702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:41.597900 kubelet[2829]: E0116 21:22:41.597263 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:41.597900 kubelet[2829]: E0116 21:22:41.597317 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:22:41.597900 kubelet[2829]: E0116 21:22:41.597449 2829 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw7zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:41.599811 kubelet[2829]: E0116 21:22:41.599763 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:22:42.443535 kubelet[2829]: E0116 21:22:42.443398 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:22:42.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:40746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:42.888766 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:40746.service - OpenSSH per-connection server daemon (10.0.0.1:40746). Jan 16 21:22:42.896644 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:42.896736 kernel: audit: type=1130 audit(1768598562.887:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:40746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:43.028000 audit[5254]: USER_ACCT pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.030338 sshd[5254]: Accepted publickey for core from 10.0.0.1 port 40746 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:22:43.034714 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:43.032000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.053664 systemd-logind[1575]: New session 10 of user core. 
Jan 16 21:22:43.070320 kernel: audit: type=1101 audit(1768598563.028:748): pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.070424 kernel: audit: type=1103 audit(1768598563.032:749): pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.032000 audit[5254]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcef023690 a2=3 a3=0 items=0 ppid=1 pid=5254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:43.097214 kernel: audit: type=1006 audit(1768598563.032:750): pid=5254 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 16 21:22:43.097539 kernel: audit: type=1300 audit(1768598563.032:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcef023690 a2=3 a3=0 items=0 ppid=1 pid=5254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:43.098403 kernel: audit: type=1327 audit(1768598563.032:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:43.032000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:43.107861 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 16 21:22:43.115000 audit[5254]: USER_START pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.120000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.148862 kernel: audit: type=1105 audit(1768598563.115:751): pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.148970 kernel: audit: type=1103 audit(1768598563.120:752): pid=5258 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.320718 sshd[5258]: Connection closed by 10.0.0.1 port 40746 Jan 16 21:22:43.320473 sshd-session[5254]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:43.328000 audit[5254]: USER_END pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.334647 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:40746.service: Deactivated successfully. 
Jan 16 21:22:43.344877 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 21:22:43.354919 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Jan 16 21:22:43.328000 audit[5254]: CRED_DISP pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.364248 systemd-logind[1575]: Removed session 10. Jan 16 21:22:43.384865 kernel: audit: type=1106 audit(1768598563.328:753): pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.384990 kernel: audit: type=1104 audit(1768598563.328:754): pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:43.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.59:22-10.0.0.1:40746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:43.653766 kubelet[2829]: E0116 21:22:43.652827 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:46.445695 kubelet[2829]: E0116 21:22:46.445037 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:22:47.473956 kubelet[2829]: E0116 21:22:47.473731 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:22:48.355207 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:40748.service - OpenSSH per-connection server daemon (10.0.0.1:40748). 
Jan 16 21:22:48.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:40748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:48.369425 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:48.369484 kernel: audit: type=1130 audit(1768598568.358:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:40748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:48.538310 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 40748 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:22:48.536000 audit[5307]: USER_ACCT pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:48.544288 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:48.540000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:48.580821 kernel: audit: type=1101 audit(1768598568.536:757): pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:48.580887 kernel: audit: type=1103 audit(1768598568.540:758): pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:48.587040 systemd-logind[1575]: New session 11 of user core. Jan 16 21:22:48.611224 kernel: audit: type=1006 audit(1768598568.540:759): pid=5307 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 16 21:22:48.611353 kernel: audit: type=1300 audit(1768598568.540:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0fb82ea0 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:48.540000 audit[5307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0fb82ea0 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:48.540000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:48.645735 kernel: audit: type=1327 audit(1768598568.540:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:48.657941 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 16 21:22:48.692000 audit[5307]: USER_START pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:48.719396 kernel: audit: type=1105 audit(1768598568.692:760): pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:48.699000 audit[5311]: CRED_ACQ pid=5311 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:48.745217 kernel: audit: type=1103 audit(1768598568.699:761): pid=5311 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:49.000354 sshd[5311]: Connection closed by 10.0.0.1 port 40748 Jan 16 21:22:49.002891 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:49.006000 audit[5307]: USER_END pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:49.012026 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:40748.service: Deactivated successfully. 
Jan 16 21:22:49.017897 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 21:22:49.020653 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Jan 16 21:22:49.027055 systemd-logind[1575]: Removed session 11. Jan 16 21:22:49.006000 audit[5307]: CRED_DISP pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:49.087050 kernel: audit: type=1106 audit(1768598569.006:762): pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:49.087340 kernel: audit: type=1104 audit(1768598569.006:763): pid=5307 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:49.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.59:22-10.0.0.1:40748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:22:49.459302 kubelet[2829]: E0116 21:22:49.451474 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:22:50.453774 kubelet[2829]: E0116 21:22:50.447910 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:22:52.448251 kubelet[2829]: E0116 21:22:52.448025 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:22:54.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:47804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:54.035216 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:47804.service - OpenSSH per-connection server daemon (10.0.0.1:47804). Jan 16 21:22:54.047363 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:54.047423 kernel: audit: type=1130 audit(1768598574.034:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:47804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:54.225941 sshd[5325]: Accepted publickey for core from 10.0.0.1 port 47804 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:22:54.230187 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:54.223000 audit[5325]: USER_ACCT pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.261171 kernel: audit: type=1101 audit(1768598574.223:766): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.225000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.265788 systemd-logind[1575]: New session 12 of user core. Jan 16 21:22:54.292418 kernel: audit: type=1103 audit(1768598574.225:767): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.292656 kernel: audit: type=1006 audit(1768598574.225:768): pid=5325 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 16 21:22:54.292687 kernel: audit: type=1300 audit(1768598574.225:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff095f740 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:54.225000 audit[5325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff095f740 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:54.225000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:54.327844 kernel: audit: type=1327 audit(1768598574.225:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:54.329471 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 16 21:22:54.359743 update_engine[1577]: I20260116 21:22:54.359579 1577 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 16 21:22:54.359743 update_engine[1577]: I20260116 21:22:54.359694 1577 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.364871 1577 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.365829 1577 omaha_request_params.cc:62] Current group set to developer Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.366221 1577 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.366242 1577 update_attempter.cc:643] Scheduling an action processor start. Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.366308 1577 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.366380 1577 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.366461 1577 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.366474 1577 omaha_request_action.cc:272] Request: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: Jan 16 21:22:54.367820 update_engine[1577]: I20260116 21:22:54.366484 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 21:22:54.368000 audit[5325]: USER_START pid=5325 uid=0 auid=500 
ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.374180 update_engine[1577]: I20260116 21:22:54.373811 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 21:22:54.376837 update_engine[1577]: I20260116 21:22:54.374933 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 16 21:22:54.391217 kernel: audit: type=1105 audit(1768598574.368:769): pid=5325 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.393000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.400705 update_engine[1577]: E20260116 21:22:54.400417 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 21:22:54.400705 update_engine[1577]: I20260116 21:22:54.400645 1577 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 16 21:22:54.420961 kernel: audit: type=1103 audit(1768598574.393:770): pid=5329 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.433923 locksmithd[1625]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 Jan 16 21:22:54.446470 containerd[1596]: time="2026-01-16T21:22:54.446305008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:54.545700 containerd[1596]: time="2026-01-16T21:22:54.544339410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:54.563629 containerd[1596]: time="2026-01-16T21:22:54.560898135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:54.563629 containerd[1596]: time="2026-01-16T21:22:54.561056119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:54.563835 kubelet[2829]: E0116 21:22:54.562930 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:54.563835 kubelet[2829]: E0116 21:22:54.562994 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:54.566053 kubelet[2829]: E0116 21:22:54.565955 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77ptz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:54.568958 kubelet[2829]: E0116 21:22:54.568334 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:22:54.623710 sshd[5329]: Connection closed by 10.0.0.1 port 47804 Jan 16 21:22:54.625051 sshd-session[5325]: pam_unix(sshd:session): session closed for user core Jan 16 21:22:54.629000 audit[5325]: USER_END pid=5325 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.638821 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:47804.service: Deactivated successfully. Jan 16 21:22:54.644063 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 21:22:54.651736 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Jan 16 21:22:54.664601 kernel: audit: type=1106 audit(1768598574.629:771): pid=5325 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.664228 systemd-logind[1575]: Removed session 12. 
Jan 16 21:22:54.630000 audit[5325]: CRED_DISP pid=5325 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:54.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.59:22-10.0.0.1:47804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:54.682212 kernel: audit: type=1104 audit(1768598574.630:772): pid=5325 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:56.444961 kubelet[2829]: E0116 21:22:56.444407 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:22:59.494930 containerd[1596]: time="2026-01-16T21:22:59.494481538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:22:59.598176 containerd[1596]: time="2026-01-16T21:22:59.598050042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:59.604938 containerd[1596]: time="2026-01-16T21:22:59.604001495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:22:59.604938 containerd[1596]: time="2026-01-16T21:22:59.604238167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:59.607053 kubelet[2829]: E0116 21:22:59.606161 2829 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:59.607053 kubelet[2829]: E0116 21:22:59.606250 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:22:59.607053 kubelet[2829]: E0116 21:22:59.606505 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czw4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:59.611312 kubelet[2829]: E0116 21:22:59.609779 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:22:59.611360 containerd[1596]: time="2026-01-16T21:22:59.609365154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:22:59.672452 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:47810.service - OpenSSH per-connection server daemon (10.0.0.1:47810). 
Jan 16 21:22:59.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:47810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:59.705240 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:22:59.705383 kernel: audit: type=1130 audit(1768598579.671:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:47810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:22:59.746403 containerd[1596]: time="2026-01-16T21:22:59.744732600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:59.753659 containerd[1596]: time="2026-01-16T21:22:59.752947037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:22:59.753659 containerd[1596]: time="2026-01-16T21:22:59.753034981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:59.753833 kubelet[2829]: E0116 21:22:59.753264 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:59.753833 kubelet[2829]: E0116 21:22:59.753317 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:22:59.753833 kubelet[2829]: E0116 21:22:59.753439 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4a36d8bfc8d44428963d068adc3adb01,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:59.763212 containerd[1596]: 
time="2026-01-16T21:22:59.761462957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:22:59.853040 containerd[1596]: time="2026-01-16T21:22:59.852909905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:22:59.870897 containerd[1596]: time="2026-01-16T21:22:59.868873192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:22:59.870897 containerd[1596]: time="2026-01-16T21:22:59.868994549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:22:59.871219 kubelet[2829]: E0116 21:22:59.869603 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:22:59.871219 kubelet[2829]: E0116 21:22:59.869670 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:22:59.871219 kubelet[2829]: E0116 21:22:59.869816 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:22:59.871527 kubelet[2829]: E0116 21:22:59.871411 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:22:59.918000 audit[5346]: USER_ACCT pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.929393 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 47810 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:22:59.930020 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:22:59.927000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.947179 systemd-logind[1575]: New session 13 of user core. 
Jan 16 21:22:59.968730 kernel: audit: type=1101 audit(1768598579.918:775): pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.968878 kernel: audit: type=1103 audit(1768598579.927:776): pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:22:59.983237 kernel: audit: type=1006 audit(1768598579.927:777): pid=5346 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 16 21:22:59.983377 kernel: audit: type=1300 audit(1768598579.927:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd097718d0 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:22:59.927000 audit[5346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd097718d0 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:00.009601 kernel: audit: type=1327 audit(1768598579.927:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:22:59.927000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:00.021900 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 16 21:23:00.029000 audit[5346]: USER_START pid=5346 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.037000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.074383 kernel: audit: type=1105 audit(1768598580.029:778): pid=5346 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.074503 kernel: audit: type=1103 audit(1768598580.037:779): pid=5350 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.343858 sshd[5350]: Connection closed by 10.0.0.1 port 47810 Jan 16 21:23:00.344465 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:00.349000 audit[5346]: USER_END pid=5346 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.354840 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:47810.service: Deactivated successfully. 
Jan 16 21:23:00.365742 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 21:23:00.369403 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Jan 16 21:23:00.378183 systemd-logind[1575]: Removed session 13. Jan 16 21:23:00.387204 kernel: audit: type=1106 audit(1768598580.349:780): pid=5346 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.387312 kernel: audit: type=1104 audit(1768598580.349:781): pid=5346 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.349000 audit[5346]: CRED_DISP pid=5346 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:00.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.59:22-10.0.0.1:47810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:01.447849 containerd[1596]: time="2026-01-16T21:23:01.447748146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:23:01.540878 containerd[1596]: time="2026-01-16T21:23:01.540736992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:01.546899 containerd[1596]: time="2026-01-16T21:23:01.546781782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:23:01.546899 containerd[1596]: time="2026-01-16T21:23:01.546882219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:01.547242 kubelet[2829]: E0116 21:23:01.547048 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:23:01.547763 kubelet[2829]: E0116 21:23:01.547250 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:23:01.547763 kubelet[2829]: E0116 21:23:01.547404 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zkb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:01.553666 kubelet[2829]: E0116 21:23:01.550449 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:23:02.450158 containerd[1596]: time="2026-01-16T21:23:02.449983924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:23:02.516467 containerd[1596]: time="2026-01-16T21:23:02.516335917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:02.522512 containerd[1596]: time="2026-01-16T21:23:02.520783432Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:23:02.522512 containerd[1596]: time="2026-01-16T21:23:02.520883727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:02.522741 kubelet[2829]: E0116 21:23:02.521050 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:23:02.522741 kubelet[2829]: E0116 21:23:02.521210 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:23:02.522741 kubelet[2829]: E0116 21:23:02.521345 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:23:02.530852 containerd[1596]: time="2026-01-16T21:23:02.530772661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:23:02.613967 containerd[1596]: time="2026-01-16T21:23:02.613598262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:02.626152 containerd[1596]: time="2026-01-16T21:23:02.624253745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:23:02.626152 containerd[1596]: time="2026-01-16T21:23:02.624384770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:02.626501 kubelet[2829]: E0116 21:23:02.624609 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:23:02.626501 kubelet[2829]: E0116 21:23:02.624676 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:23:02.626501 kubelet[2829]: E0116 21:23:02.624829 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:02.627536 kubelet[2829]: E0116 21:23:02.627495 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:23:03.448218 containerd[1596]: time="2026-01-16T21:23:03.447992657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:23:03.546727 containerd[1596]: time="2026-01-16T21:23:03.546440124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:03.552806 containerd[1596]: time="2026-01-16T21:23:03.552660735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:23:03.553510 containerd[1596]: time="2026-01-16T21:23:03.552992436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:03.555799 kubelet[2829]: E0116 21:23:03.554161 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:23:03.555799 kubelet[2829]: E0116 21:23:03.554273 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:23:03.555799 kubelet[2829]: E0116 21:23:03.554655 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw7zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:03.561245 kubelet[2829]: E0116 21:23:03.560836 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:23:04.282061 
update_engine[1577]: I20260116 21:23:04.281247 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 21:23:04.282061 update_engine[1577]: I20260116 21:23:04.281356 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 21:23:04.282061 update_engine[1577]: I20260116 21:23:04.281944 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 16 21:23:04.303258 update_engine[1577]: E20260116 21:23:04.302973 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 21:23:04.303258 update_engine[1577]: I20260116 21:23:04.303202 1577 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 16 21:23:05.369246 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:42286.service - OpenSSH per-connection server daemon (10.0.0.1:42286). Jan 16 21:23:05.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:42286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:05.375504 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:05.375649 kernel: audit: type=1130 audit(1768598585.368:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:42286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:05.561000 audit[5371]: USER_ACCT pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.563837 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 42286 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:05.572441 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:05.568000 audit[5371]: CRED_ACQ pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.600298 kernel: audit: type=1101 audit(1768598585.561:784): pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.600397 kernel: audit: type=1103 audit(1768598585.568:785): pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.605184 systemd-logind[1575]: New session 14 of user core. 
Jan 16 21:23:05.612711 kernel: audit: type=1006 audit(1768598585.568:786): pid=5371 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 16 21:23:05.568000 audit[5371]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a68d110 a2=3 a3=0 items=0 ppid=1 pid=5371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:05.629810 kernel: audit: type=1300 audit(1768598585.568:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a68d110 a2=3 a3=0 items=0 ppid=1 pid=5371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:05.629905 kernel: audit: type=1327 audit(1768598585.568:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:05.568000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:05.639625 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 16 21:23:05.646000 audit[5371]: USER_START pid=5371 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.650000 audit[5375]: CRED_ACQ pid=5375 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.710134 kernel: audit: type=1105 audit(1768598585.646:787): pid=5371 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.711017 kernel: audit: type=1103 audit(1768598585.650:788): pid=5375 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.843707 sshd[5375]: Connection closed by 10.0.0.1 port 42286 Jan 16 21:23:05.847658 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:05.878000 audit[5371]: USER_END pid=5371 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.891153 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:42286.service: Deactivated successfully. 
Jan 16 21:23:05.896334 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 21:23:05.900208 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Jan 16 21:23:05.908726 systemd-logind[1575]: Removed session 14. Jan 16 21:23:05.914791 kernel: audit: type=1106 audit(1768598585.878:789): pid=5371 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.914928 kernel: audit: type=1104 audit(1768598585.878:790): pid=5371 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.878000 audit[5371]: CRED_DISP pid=5371 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:05.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.59:22-10.0.0.1:42286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:06.446968 kubelet[2829]: E0116 21:23:06.446920 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:23:10.450360 kubelet[2829]: E0116 21:23:10.450162 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:23:10.455538 kubelet[2829]: E0116 21:23:10.452049 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:23:10.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:42302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:10.885723 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:42302.service - OpenSSH per-connection server daemon (10.0.0.1:42302). Jan 16 21:23:10.890611 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:10.890681 kernel: audit: type=1130 audit(1768598590.884:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:42302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:11.051182 sshd[5394]: Accepted publickey for core from 10.0.0.1 port 42302 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:11.049000 audit[5394]: USER_ACCT pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.055714 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:11.081496 systemd-logind[1575]: New session 15 of user core. 
Jan 16 21:23:11.088823 kernel: audit: type=1101 audit(1768598591.049:793): pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.088918 kernel: audit: type=1103 audit(1768598591.051:794): pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.051000 audit[5394]: CRED_ACQ pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.104481 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 16 21:23:11.112180 kernel: audit: type=1006 audit(1768598591.051:795): pid=5394 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 16 21:23:11.051000 audit[5394]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8391cd50 a2=3 a3=0 items=0 ppid=1 pid=5394 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:11.129022 kernel: audit: type=1300 audit(1768598591.051:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8391cd50 a2=3 a3=0 items=0 ppid=1 pid=5394 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:11.129239 kernel: audit: type=1327 audit(1768598591.051:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:11.051000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:11.118000 audit[5394]: USER_START pid=5394 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.177791 kernel: audit: type=1105 audit(1768598591.118:796): pid=5394 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.127000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.205336 kernel: audit: type=1103 audit(1768598591.127:797): pid=5398 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.381923 sshd[5398]: Connection closed by 10.0.0.1 port 42302 Jan 16 21:23:11.382432 sshd-session[5394]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:11.382000 audit[5394]: USER_END pid=5394 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.389052 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:42302.service: Deactivated successfully. Jan 16 21:23:11.394688 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 21:23:11.398903 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Jan 16 21:23:11.402362 systemd-logind[1575]: Removed session 15. 
Jan 16 21:23:11.411392 kernel: audit: type=1106 audit(1768598591.382:798): pid=5394 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.382000 audit[5394]: CRED_DISP pid=5394 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:11.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.59:22-10.0.0.1:42302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:11.427304 kernel: audit: type=1104 audit(1768598591.382:799): pid=5394 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:14.282536 update_engine[1577]: I20260116 21:23:14.281291 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 21:23:14.282536 update_engine[1577]: I20260116 21:23:14.281558 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 21:23:14.282536 update_engine[1577]: I20260116 21:23:14.282382 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 16 21:23:14.303955 update_engine[1577]: E20260116 21:23:14.303821 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 21:23:14.304178 update_engine[1577]: I20260116 21:23:14.303974 1577 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 16 21:23:14.449138 kubelet[2829]: E0116 21:23:14.446638 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:23:16.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:59980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:16.434797 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:59980.service - OpenSSH per-connection server daemon (10.0.0.1:59980). Jan 16 21:23:16.441524 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:16.441658 kernel: audit: type=1130 audit(1768598596.432:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:59980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:16.450945 kubelet[2829]: E0116 21:23:16.448641 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:23:16.592000 audit[5441]: USER_ACCT pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.598282 sshd[5441]: Accepted publickey for core from 10.0.0.1 port 59980 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:16.597962 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:16.614352 systemd-logind[1575]: New session 16 of user core. 
Jan 16 21:23:16.595000 audit[5441]: CRED_ACQ pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.631958 kernel: audit: type=1101 audit(1768598596.592:802): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.632046 kernel: audit: type=1103 audit(1768598596.595:803): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.632179 kernel: audit: type=1006 audit(1768598596.595:804): pid=5441 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 16 21:23:16.595000 audit[5441]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5ca2df50 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:16.660393 kernel: audit: type=1300 audit(1768598596.595:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5ca2df50 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:16.595000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:16.670841 kernel: audit: type=1327 audit(1768598596.595:804): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:16.669949 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 16 21:23:16.679000 audit[5441]: USER_START pid=5441 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.704288 kernel: audit: type=1105 audit(1768598596.679:805): pid=5441 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.704419 kernel: audit: type=1103 audit(1768598596.684:806): pid=5445 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.684000 audit[5445]: CRED_ACQ pid=5445 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.941231 sshd[5445]: Connection closed by 10.0.0.1 port 59980 Jan 16 21:23:16.938492 sshd-session[5441]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:16.953000 audit[5441]: USER_END pid=5441 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 16 21:23:16.968564 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:59996.service - OpenSSH per-connection server daemon (10.0.0.1:59996). Jan 16 21:23:16.979740 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:59980.service: Deactivated successfully. Jan 16 21:23:16.983701 kernel: audit: type=1106 audit(1768598596.953:807): pid=5441 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.953000 audit[5441]: CRED_DISP pid=5441 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.988734 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 21:23:16.996342 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Jan 16 21:23:16.999351 systemd-logind[1575]: Removed session 16. Jan 16 21:23:17.003319 kernel: audit: type=1104 audit(1768598596.953:808): pid=5441 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:16.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:59996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:16.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.59:22-10.0.0.1:59980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:17.113000 audit[5456]: USER_ACCT pid=5456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.118910 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 59996 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:17.117000 audit[5456]: CRED_ACQ pid=5456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.118000 audit[5456]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed362dcb0 a2=3 a3=0 items=0 ppid=1 pid=5456 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:17.118000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:17.120945 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:17.135655 systemd-logind[1575]: New session 17 of user core. Jan 16 21:23:17.142184 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 16 21:23:17.151000 audit[5456]: USER_START pid=5456 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.162000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.429883 sshd[5463]: Connection closed by 10.0.0.1 port 59996 Jan 16 21:23:17.432016 sshd-session[5456]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:17.436000 audit[5456]: USER_END pid=5456 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.436000 audit[5456]: CRED_DISP pid=5456 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.443684 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:59996.service: Deactivated successfully. Jan 16 21:23:17.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.59:22-10.0.0.1:59996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:17.446941 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 16 21:23:17.451804 kubelet[2829]: E0116 21:23:17.451698 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:23:17.452069 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Jan 16 21:23:17.467531 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Jan 16 21:23:17.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.59:22-10.0.0.1:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:17.470516 systemd-logind[1575]: Removed session 17. 
Jan 16 21:23:17.597000 audit[5475]: USER_ACCT pid=5475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.599281 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:17.599000 audit[5475]: CRED_ACQ pid=5475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.600000 audit[5475]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7bd3a7e0 a2=3 a3=0 items=0 ppid=1 pid=5475 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:17.600000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:17.603478 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:17.616575 systemd-logind[1575]: New session 18 of user core. Jan 16 21:23:17.637350 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 16 21:23:17.644000 audit[5475]: USER_START pid=5475 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.652000 audit[5479]: CRED_ACQ pid=5479 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.866887 sshd[5479]: Connection closed by 10.0.0.1 port 60006 Jan 16 21:23:17.863181 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:17.866000 audit[5475]: USER_END pid=5475 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.869000 audit[5475]: CRED_DISP pid=5475 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:17.878513 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:60006.service: Deactivated successfully. Jan 16 21:23:17.887048 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 21:23:17.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.59:22-10.0.0.1:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:17.892185 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. 
Jan 16 21:23:17.905678 systemd-logind[1575]: Removed session 18. Jan 16 21:23:20.445965 kubelet[2829]: E0116 21:23:20.445393 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:23:21.444369 kubelet[2829]: E0116 21:23:21.443803 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:23:22.446506 kubelet[2829]: E0116 21:23:22.446415 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:23:22.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:58064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:22.892559 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:58064.service - OpenSSH per-connection server daemon (10.0.0.1:58064). Jan 16 21:23:22.907150 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 21:23:22.907245 kernel: audit: type=1130 audit(1768598602.892:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:58064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:23.019000 audit[5493]: USER_ACCT pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.020718 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 58064 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:23.034070 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:23.047179 kernel: audit: type=1101 audit(1768598603.019:829): pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.025000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.061948 systemd-logind[1575]: New session 19 of user core. Jan 16 21:23:23.085234 kernel: audit: type=1103 audit(1768598603.025:830): pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.085321 kernel: audit: type=1006 audit(1768598603.025:831): pid=5493 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 16 21:23:23.085339 kernel: audit: type=1300 audit(1768598603.025:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3a350100 a2=3 a3=0 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:23.025000 audit[5493]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3a350100 a2=3 a3=0 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:23.104271 kernel: audit: type=1327 audit(1768598603.025:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:23.025000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:23.107771 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 16 21:23:23.115300 kernel: audit: type=1105 audit(1768598603.112:832): pid=5493 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.112000 audit[5493]: USER_START pid=5493 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.138788 kernel: audit: type=1103 audit(1768598603.119:833): pid=5497 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.119000 audit[5497]: CRED_ACQ pid=5497 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.351577 sshd[5497]: Connection closed by 10.0.0.1 port 58064 Jan 16 21:23:23.350375 sshd-session[5493]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:23.351000 audit[5493]: USER_END pid=5493 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.361920 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:58064.service: Deactivated successfully. 
Jan 16 21:23:23.367853 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 21:23:23.371208 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit. Jan 16 21:23:23.383406 systemd-logind[1575]: Removed session 19. Jan 16 21:23:23.351000 audit[5493]: CRED_DISP pid=5493 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.400904 kernel: audit: type=1106 audit(1768598603.351:834): pid=5493 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.400984 kernel: audit: type=1104 audit(1768598603.351:835): pid=5493 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:23.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.59:22-10.0.0.1:58064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:24.280886 update_engine[1577]: I20260116 21:23:24.280812 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 21:23:24.282325 update_engine[1577]: I20260116 21:23:24.281530 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 21:23:24.282325 update_engine[1577]: I20260116 21:23:24.282217 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 16 21:23:24.310864 update_engine[1577]: E20260116 21:23:24.306929 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307410 1577 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307435 1577 omaha_request_action.cc:617] Omaha request response: Jan 16 21:23:24.310864 update_engine[1577]: E20260116 21:23:24.307569 1577 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307600 1577 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307687 1577 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307696 1577 update_attempter.cc:306] Processing Done. Jan 16 21:23:24.310864 update_engine[1577]: E20260116 21:23:24.307718 1577 update_attempter.cc:619] Update failed. Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307727 1577 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307735 1577 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 16 21:23:24.310864 update_engine[1577]: I20260116 21:23:24.307744 1577 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 16 21:23:24.317252 update_engine[1577]: I20260116 21:23:24.316412 1577 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 16 21:23:24.317252 update_engine[1577]: I20260116 21:23:24.316491 1577 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 16 21:23:24.317252 update_engine[1577]: I20260116 21:23:24.316507 1577 omaha_request_action.cc:272] Request: Jan 16 21:23:24.317252 update_engine[1577]: Jan 16 21:23:24.317252 update_engine[1577]: Jan 16 21:23:24.317252 update_engine[1577]: Jan 16 21:23:24.317252 update_engine[1577]: Jan 16 21:23:24.317252 update_engine[1577]: Jan 16 21:23:24.317252 update_engine[1577]: Jan 16 21:23:24.317252 update_engine[1577]: I20260116 21:23:24.316520 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 21:23:24.317252 update_engine[1577]: I20260116 21:23:24.316566 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 21:23:24.317252 update_engine[1577]: I20260116 21:23:24.317055 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 16 21:23:24.321998 locksmithd[1625]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 16 21:23:24.342028 update_engine[1577]: E20260116 21:23:24.339903 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 21:23:24.342028 update_engine[1577]: I20260116 21:23:24.340178 1577 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 16 21:23:24.342028 update_engine[1577]: I20260116 21:23:24.340201 1577 omaha_request_action.cc:617] Omaha request response: Jan 16 21:23:24.342028 update_engine[1577]: I20260116 21:23:24.340215 1577 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 16 21:23:24.342028 update_engine[1577]: I20260116 21:23:24.340225 1577 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 16 21:23:24.342028 update_engine[1577]: I20260116 21:23:24.340234 1577 update_attempter.cc:306] Processing Done. Jan 16 21:23:24.342028 update_engine[1577]: I20260116 21:23:24.340246 1577 update_attempter.cc:310] Error event sent. 
Jan 16 21:23:24.342028 update_engine[1577]: I20260116 21:23:24.340262 1577 update_check_scheduler.cc:74] Next update check in 43m19s Jan 16 21:23:24.342422 locksmithd[1625]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 16 21:23:25.448914 kubelet[2829]: E0116 21:23:25.446837 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:27.461248 kubelet[2829]: E0116 21:23:27.460297 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:23:28.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:58076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:28.428027 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:58076.service - OpenSSH per-connection server daemon (10.0.0.1:58076). Jan 16 21:23:28.471242 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:28.471331 kernel: audit: type=1130 audit(1768598608.427:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:58076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:28.696000 audit[5515]: USER_ACCT pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.741475 sshd[5515]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:28.742991 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:28.763831 kernel: audit: type=1101 audit(1768598608.696:838): pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.716000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.801253 systemd-logind[1575]: New session 20 of user core. 
Jan 16 21:23:28.833286 kernel: audit: type=1103 audit(1768598608.716:839): pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.833389 kernel: audit: type=1006 audit(1768598608.722:840): pid=5515 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 16 21:23:28.722000 audit[5515]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc59b5be0 a2=3 a3=0 items=0 ppid=1 pid=5515 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:28.879859 kernel: audit: type=1300 audit(1768598608.722:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc59b5be0 a2=3 a3=0 items=0 ppid=1 pid=5515 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:28.722000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:28.889616 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 16 21:23:28.924893 kernel: audit: type=1327 audit(1768598608.722:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:28.925884 kernel: audit: type=1105 audit(1768598608.907:841): pid=5515 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.907000 audit[5515]: USER_START pid=5515 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:28.912000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.038232 kernel: audit: type=1103 audit(1768598608.912:842): pid=5519 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.473463 sshd[5519]: Connection closed by 10.0.0.1 port 58076 Jan 16 21:23:29.471357 sshd-session[5515]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:29.478000 audit[5515]: USER_END pid=5515 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 16 21:23:29.498345 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. Jan 16 21:23:29.503898 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:58076.service: Deactivated successfully. Jan 16 21:23:29.527360 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 21:23:29.532005 kernel: audit: type=1106 audit(1768598609.478:843): pid=5515 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.532054 kernel: audit: type=1104 audit(1768598609.478:844): pid=5515 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.478000 audit[5515]: CRED_DISP pid=5515 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:29.547542 systemd-logind[1575]: Removed session 20. Jan 16 21:23:29.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.59:22-10.0.0.1:58076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:30.446763 kubelet[2829]: E0116 21:23:30.446692 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:23:31.465726 kubelet[2829]: E0116 21:23:31.465268 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:23:32.471040 kubelet[2829]: E0116 21:23:32.469556 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:23:33.458440 kubelet[2829]: E0116 21:23:33.450848 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:23:34.446060 kubelet[2829]: E0116 21:23:34.442812 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:34.523399 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:56696.service - OpenSSH per-connection server daemon (10.0.0.1:56696). Jan 16 21:23:34.572460 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:34.572600 kernel: audit: type=1130 audit(1768598614.515:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:34.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:34.999470 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 56696 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:34.997000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.018488 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:35.006000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.087790 systemd-logind[1575]: New session 21 of user core. 
Jan 16 21:23:35.136802 kernel: audit: type=1101 audit(1768598614.997:847): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.136888 kernel: audit: type=1103 audit(1768598615.006:848): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.137616 kernel: audit: type=1006 audit(1768598615.006:849): pid=5533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 16 21:23:35.006000 audit[5533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc98888bc0 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:35.172337 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 16 21:23:35.006000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:35.294230 kernel: audit: type=1300 audit(1768598615.006:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc98888bc0 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:35.294362 kernel: audit: type=1327 audit(1768598615.006:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:35.294409 kernel: audit: type=1105 audit(1768598615.206:850): pid=5533 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.206000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.416228 kernel: audit: type=1103 audit(1768598615.218:851): pid=5537 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.218000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.936000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.948053 sshd[5537]: Connection closed by 10.0.0.1 port 56696 Jan 16 21:23:35.937293 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:35.969573 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:56696.service: Deactivated successfully. Jan 16 21:23:36.032041 kernel: audit: type=1106 audit(1768598615.936:852): pid=5533 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:36.032284 kernel: audit: type=1104 audit(1768598615.940:853): pid=5533 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:35.940000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:36.028045 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 21:23:36.048339 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. Jan 16 21:23:36.062517 systemd-logind[1575]: Removed session 21. 
Jan 16 21:23:35.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.59:22-10.0.0.1:56696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:36.453900 kubelet[2829]: E0116 21:23:36.447611 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:23:38.452867 kubelet[2829]: E0116 21:23:38.448897 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:41.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:56712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:41.032847 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:56712.service - OpenSSH per-connection server daemon (10.0.0.1:56712). Jan 16 21:23:41.065930 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:41.066216 kernel: audit: type=1130 audit(1768598621.031:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:56712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:41.798000 audit[5551]: USER_ACCT pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:41.836295 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 56712 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:41.879962 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:41.936290 kernel: audit: type=1101 audit(1768598621.798:856): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:41.850000 audit[5551]: CRED_ACQ pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:42.080737 kernel: audit: type=1103 audit(1768598621.850:857): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:42.080839 kernel: audit: type=1006 audit(1768598621.850:858): pid=5551 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 16 21:23:42.073621 systemd-logind[1575]: New session 22 of user core. 
Jan 16 21:23:41.850000 audit[5551]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee714a760 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:42.126931 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 16 21:23:42.215890 kernel: audit: type=1300 audit(1768598621.850:858): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee714a760 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:41.850000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:42.272499 kernel: audit: type=1327 audit(1768598621.850:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:42.272616 kernel: audit: type=1105 audit(1768598622.188:859): pid=5551 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:42.188000 audit[5551]: USER_START pid=5551 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:42.199000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:42.451061 kubelet[2829]: E0116 21:23:42.444239 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:23:42.475870 kernel: audit: type=1103 audit(1768598622.199:860): pid=5561 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:42.481870 containerd[1596]: time="2026-01-16T21:23:42.480061464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:23:42.772997 containerd[1596]: time="2026-01-16T21:23:42.771375039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:42.784269 containerd[1596]: time="2026-01-16T21:23:42.783994068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:23:42.786846 containerd[1596]: time="2026-01-16T21:23:42.784496205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:42.788449 kubelet[2829]: E0116 21:23:42.788401 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:23:42.791034 kubelet[2829]: E0116 21:23:42.788571 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:23:42.791034 kubelet[2829]: E0116 21:23:42.788836 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77ptz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-x2ltk_calico-apiserver(cf888ed5-265d-4b90-8b8f-76579a07e031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:42.795583 kubelet[2829]: E0116 21:23:42.795549 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:23:43.483271 kubelet[2829]: E0116 21:23:43.482903 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:43.500658 containerd[1596]: 
time="2026-01-16T21:23:43.499336073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 21:23:43.673061 containerd[1596]: time="2026-01-16T21:23:43.669215197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:43.693563 sshd[5561]: Connection closed by 10.0.0.1 port 56712 Jan 16 21:23:43.701658 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:43.708489 containerd[1596]: time="2026-01-16T21:23:43.708431471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 21:23:43.720072 containerd[1596]: time="2026-01-16T21:23:43.720040235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:43.728546 kubelet[2829]: E0116 21:23:43.727962 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:23:43.741317 kubelet[2829]: E0116 21:23:43.740271 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 21:23:43.741317 kubelet[2829]: E0116 21:23:43.740489 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 21:23:43.883290 kernel: audit: type=1106 audit(1768598623.740:861): pid=5551 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:43.740000 audit[5551]: USER_END pid=5551 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:43.883505 containerd[1596]: time="2026-01-16T21:23:43.805621509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 21:23:43.745000 audit[5551]: CRED_DISP pid=5551 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:43.931571 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:56712.service: Deactivated successfully. Jan 16 21:23:43.967633 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 21:23:43.980282 kernel: audit: type=1104 audit(1768598623.745:862): pid=5551 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:43.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.59:22-10.0.0.1:56712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:44.063256 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Jan 16 21:23:44.086037 systemd-logind[1575]: Removed session 22. Jan 16 21:23:44.206351 containerd[1596]: time="2026-01-16T21:23:44.199813258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:44.227373 containerd[1596]: time="2026-01-16T21:23:44.215872102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 21:23:44.227373 containerd[1596]: time="2026-01-16T21:23:44.218857213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:44.227873 kubelet[2829]: E0116 21:23:44.227760 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:23:44.227970 kubelet[2829]: E0116 21:23:44.227891 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 21:23:44.233549 kubelet[2829]: E0116 21:23:44.228028 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gx62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4hncm_calico-system(8c8c0e82-b18e-4cf2-bc74-ab0296b892f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:44.236179 kubelet[2829]: E0116 21:23:44.233932 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:23:44.491344 containerd[1596]: time="2026-01-16T21:23:44.485384957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 21:23:44.852935 containerd[1596]: time="2026-01-16T21:23:44.847808316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:44.876958 containerd[1596]: time="2026-01-16T21:23:44.875459236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 21:23:44.876958 containerd[1596]: time="2026-01-16T21:23:44.875553923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:44.884608 kubelet[2829]: E0116 21:23:44.875656 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:23:44.884608 kubelet[2829]: E0116 21:23:44.883527 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 21:23:44.901473 kubelet[2829]: E0116 21:23:44.886327 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zkb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j7hqz_calico-system(044f9539-8858-49e2-8876-e2c650ad8d77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:44.901473 kubelet[2829]: E0116 21:23:44.894359 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:23:44.911509 containerd[1596]: 
time="2026-01-16T21:23:44.892667707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 21:23:45.081580 containerd[1596]: time="2026-01-16T21:23:45.078953676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:45.118609 containerd[1596]: time="2026-01-16T21:23:45.097504521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:45.118609 containerd[1596]: time="2026-01-16T21:23:45.090070714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 21:23:45.123046 kubelet[2829]: E0116 21:23:45.121294 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:23:45.127804 kubelet[2829]: E0116 21:23:45.127766 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 21:23:45.139458 kubelet[2829]: E0116 21:23:45.139394 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4a36d8bfc8d44428963d068adc3adb01,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:45.187780 containerd[1596]: time="2026-01-16T21:23:45.173653700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 21:23:45.406025 containerd[1596]: 
time="2026-01-16T21:23:45.384339862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:45.494959 containerd[1596]: time="2026-01-16T21:23:45.493295917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 21:23:45.494959 containerd[1596]: time="2026-01-16T21:23:45.493408737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:45.505284 kubelet[2829]: E0116 21:23:45.505242 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:23:45.505431 kubelet[2829]: E0116 21:23:45.505407 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 21:23:45.506610 kubelet[2829]: E0116 21:23:45.505647 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65bbd7c669-5jcq4_calico-system(1ffcbae4-3231-47a7-b3a3-9a78e5206e0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:45.560394 kubelet[2829]: E0116 21:23:45.559954 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:23:48.744989 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:58028.service - OpenSSH per-connection server daemon (10.0.0.1:58028). Jan 16 21:23:48.808065 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:48.808341 kernel: audit: type=1130 audit(1768598628.744:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:58028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:48.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:58028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:49.403000 audit[5606]: USER_ACCT pid=5606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:49.416467 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 58028 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:49.428990 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:49.499228 containerd[1596]: time="2026-01-16T21:23:49.496537801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 21:23:49.516206 systemd-logind[1575]: New session 23 of user core. Jan 16 21:23:49.415000 audit[5606]: CRED_ACQ pid=5606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:49.554329 kernel: audit: type=1101 audit(1768598629.403:865): pid=5606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:49.554415 kernel: audit: type=1103 audit(1768598629.415:866): pid=5606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:49.733636 kernel: audit: type=1006 audit(1768598629.423:867): pid=5606 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 16 21:23:49.739417 systemd[1]: Started session-23.scope - 
Session 23 of User core. Jan 16 21:23:49.743309 containerd[1596]: time="2026-01-16T21:23:49.741372231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:49.773845 containerd[1596]: time="2026-01-16T21:23:49.773554588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 21:23:49.773845 containerd[1596]: time="2026-01-16T21:23:49.773663902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:49.775895 kubelet[2829]: E0116 21:23:49.775534 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:23:49.775895 kubelet[2829]: E0116 21:23:49.775585 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 21:23:49.776621 kubelet[2829]: E0116 21:23:49.776035 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czw4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f68b6d698-6gdmk_calico-apiserver(484b15e8-2e9e-4270-8a9c-899b52ca1f08): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:49.813580 kubelet[2829]: E0116 21:23:49.784625 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:23:49.838561 kernel: audit: type=1300 audit(1768598629.423:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe92120d30 a2=3 a3=0 items=0 ppid=1 pid=5606 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:49.423000 audit[5606]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe92120d30 a2=3 a3=0 items=0 ppid=1 pid=5606 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:49.971007 kernel: audit: type=1327 audit(1768598629.423:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:49.423000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:50.024002 kernel: audit: type=1105 audit(1768598629.841:868): pid=5606 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 
21:23:49.841000 audit[5606]: USER_START pid=5606 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:49.881000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:50.255427 kernel: audit: type=1103 audit(1768598629.881:869): pid=5610 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:50.743944 sshd[5610]: Connection closed by 10.0.0.1 port 58028 Jan 16 21:23:50.747436 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:50.769000 audit[5606]: USER_END pid=5606 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:50.799409 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:58028.service: Deactivated successfully. Jan 16 21:23:50.813027 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 21:23:50.853051 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit. 
Jan 16 21:23:50.870622 kernel: audit: type=1106 audit(1768598630.769:870): pid=5606 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:50.870805 systemd-logind[1575]: Removed session 23. Jan 16 21:23:50.769000 audit[5606]: CRED_DISP pid=5606 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:50.948343 kernel: audit: type=1104 audit(1768598630.769:871): pid=5606 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:50.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.59:22-10.0.0.1:58028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:54.453619 kubelet[2829]: E0116 21:23:54.452356 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:23:55.473512 containerd[1596]: time="2026-01-16T21:23:55.473469594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 21:23:55.474063 kubelet[2829]: E0116 21:23:55.473889 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:23:55.604597 containerd[1596]: time="2026-01-16T21:23:55.602312986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 21:23:55.617604 containerd[1596]: time="2026-01-16T21:23:55.617317779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 21:23:55.617604 containerd[1596]: time="2026-01-16T21:23:55.617420892Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 21:23:55.620342 kubelet[2829]: E0116 21:23:55.618506 2829 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:23:55.620342 kubelet[2829]: E0116 21:23:55.618553 2829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 21:23:55.620342 kubelet[2829]: E0116 21:23:55.618681 2829 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.cr
t,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw7zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66dd98b47c-2sbfh_calico-system(fe95499a-0c2a-421c-aaa9-9ead2566d247): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 21:23:55.621046 kubelet[2829]: E0116 21:23:55.620997 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:23:55.808475 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:38364.service - OpenSSH per-connection server daemon (10.0.0.1:38364). Jan 16 21:23:55.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:38364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:55.822888 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:23:55.822953 kernel: audit: type=1130 audit(1768598635.807:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:38364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:23:56.252000 audit[5637]: USER_ACCT pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:56.296503 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:56.308455 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 38364 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:56.331466 kernel: audit: type=1101 audit(1768598636.252:874): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:56.326536 systemd-logind[1575]: New session 24 of user core. Jan 16 21:23:56.283000 audit[5637]: CRED_ACQ pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:56.339488 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 16 21:23:56.383806 kernel: audit: type=1103 audit(1768598636.283:875): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:56.383886 kernel: audit: type=1006 audit(1768598636.285:876): pid=5637 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 16 21:23:56.442571 kernel: audit: type=1300 audit(1768598636.285:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5bf93e20 a2=3 a3=0 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:56.285000 audit[5637]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5bf93e20 a2=3 a3=0 items=0 ppid=1 pid=5637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:56.461552 kernel: audit: type=1327 audit(1768598636.285:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:56.285000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:56.512968 kernel: audit: type=1105 audit(1768598636.369:877): pid=5637 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:56.369000 audit[5637]: USER_START pid=5637 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:56.385000 audit[5641]: CRED_ACQ pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:56.550258 kernel: audit: type=1103 audit(1768598636.385:878): pid=5641 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:57.023547 sshd[5641]: Connection closed by 10.0.0.1 port 38364 Jan 16 21:23:57.027000 audit[5637]: USER_END pid=5637 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:57.025336 sshd-session[5637]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:57.141595 kernel: audit: type=1106 audit(1768598637.027:879): pid=5637 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:57.027000 audit[5637]: CRED_DISP pid=5637 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 16 21:23:57.183310 kernel: audit: type=1104 audit(1768598637.027:880): pid=5637 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:57.189981 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:38364.service: Deactivated successfully. Jan 16 21:23:57.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.59:22-10.0.0.1:38364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:57.197049 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 21:23:57.201706 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit. Jan 16 21:23:57.204525 systemd-logind[1575]: Removed session 24. Jan 16 21:23:57.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.59:22-10.0.0.1:38392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:23:57.227687 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:38392.service - OpenSSH per-connection server daemon (10.0.0.1:38392). 
Jan 16 21:23:57.547844 kubelet[2829]: E0116 21:23:57.547695 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:23:57.589000 audit[5656]: USER_ACCT pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:57.602001 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 38392 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:23:57.598000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:57.604000 audit[5656]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa0577570 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:23:57.604000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:23:57.618499 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:23:57.694050 systemd-logind[1575]: New session 25 of user core. Jan 16 21:23:57.719449 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 16 21:23:57.742000 audit[5656]: USER_START pid=5656 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:57.763000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:58.449935 kubelet[2829]: E0116 21:23:58.449830 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:23:59.913256 sshd[5660]: Connection closed by 10.0.0.1 port 38392 Jan 16 21:23:59.915614 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Jan 16 21:23:59.926000 audit[5656]: USER_END pid=5656 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:59.928000 audit[5656]: CRED_DISP pid=5656 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:23:59.971651 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:38392.service: Deactivated successfully. Jan 16 21:23:59.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.59:22-10.0.0.1:38392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:00.012857 systemd[1]: session-25.scope: Deactivated successfully. Jan 16 21:24:00.018887 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Jan 16 21:24:00.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.59:22-10.0.0.1:38442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:00.042311 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:38442.service - OpenSSH per-connection server daemon (10.0.0.1:38442). Jan 16 21:24:00.057378 systemd-logind[1575]: Removed session 25. 
Jan 16 21:24:00.467816 kubelet[2829]: E0116 21:24:00.467675 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:24:00.489986 kubelet[2829]: E0116 21:24:00.488028 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:24:00.524000 audit[5679]: USER_ACCT pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:00.528529 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 38442 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 
21:24:00.548000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:00.551000 audit[5679]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4e286710 a2=3 a3=0 items=0 ppid=1 pid=5679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:00.551000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:00.562837 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:00.608465 systemd-logind[1575]: New session 26 of user core. Jan 16 21:24:00.618399 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 16 21:24:00.644000 audit[5679]: USER_START pid=5679 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:00.653000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:01.447069 kubelet[2829]: E0116 21:24:01.445690 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:01.457600 kubelet[2829]: E0116 21:24:01.453651 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:01.457600 kubelet[2829]: E0116 21:24:01.454486 2829 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 16 21:24:03.902898 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 16 21:24:03.903322 kernel: audit: type=1325 audit(1768598643.832:897): table=filter:143 family=2 entries=26 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:03.832000 audit[5699]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:03.903470 sshd[5683]: Connection closed by 10.0.0.1 port 38442 Jan 16 21:24:03.869326 sshd-session[5679]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:03.832000 audit[5699]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffda26dda80 a2=0 a3=7ffda26dda6c items=0 ppid=2988 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:03.998253 kernel: audit: type=1300 audit(1768598643.832:897): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffda26dda80 a2=0 a3=7ffda26dda6c items=0 ppid=2988 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:04.027261 kernel: audit: type=1327 audit(1768598643.832:897): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:03.832000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:04.022483 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:38442.service: Deactivated successfully. Jan 16 21:24:04.032468 systemd[1]: session-26.scope: Deactivated successfully. Jan 16 21:24:04.044865 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit. Jan 16 21:24:03.898000 audit[5679]: USER_END pid=5679 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:04.077994 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:38346.service - OpenSSH per-connection server daemon (10.0.0.1:38346). Jan 16 21:24:04.085666 systemd-logind[1575]: Removed session 26. Jan 16 21:24:04.155713 kernel: audit: type=1106 audit(1768598643.898:898): pid=5679 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:03.898000 audit[5679]: CRED_DISP pid=5679 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:04.217593 kernel: audit: type=1104 audit(1768598643.898:899): pid=5679 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:04.022000 audit[1]: SERVICE_STOP pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.59:22-10.0.0.1:38442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:04.034000 audit[5699]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:04.316314 kernel: audit: type=1131 audit(1768598644.022:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.59:22-10.0.0.1:38442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:04.316397 kernel: audit: type=1325 audit(1768598644.034:901): table=nat:144 family=2 entries=20 op=nft_register_rule pid=5699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:04.034000 audit[5699]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffda26dda80 a2=0 a3=0 items=0 ppid=2988 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:04.405293 kernel: audit: type=1300 audit(1768598644.034:901): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffda26dda80 a2=0 a3=0 items=0 ppid=2988 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:04.405422 kernel: audit: type=1327 audit(1768598644.034:901): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:04.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:04.077000 audit[1]: SERVICE_START pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.59:22-10.0.0.1:38346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:04.486265 kernel: audit: type=1130 audit(1768598644.077:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.59:22-10.0.0.1:38346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:04.318000 audit[5706]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:04.318000 audit[5706]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff0192f1e0 a2=0 a3=7fff0192f1cc items=0 ppid=2988 pid=5706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:04.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:04.451000 audit[5706]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:04.451000 audit[5706]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff0192f1e0 a2=0 a3=0 items=0 ppid=2988 pid=5706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:04.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:04.781000 audit[5704]: USER_ACCT pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:04.783430 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 38346 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:24:04.796000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:04.796000 audit[5704]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb3212160 a2=3 a3=0 items=0 ppid=1 pid=5704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:04.796000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:04.800351 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:04.893870 systemd-logind[1575]: New session 27 of user core. Jan 16 21:24:04.940427 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 16 21:24:04.983000 audit[5704]: USER_START pid=5704 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:05.006000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:05.470056 kubelet[2829]: E0116 21:24:05.469455 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:24:06.599367 sshd[5710]: Connection closed by 10.0.0.1 port 38346 Jan 16 21:24:06.601692 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:06.623000 audit[5704]: USER_END pid=5704 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:06.631000 audit[5704]: CRED_DISP pid=5704 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:06.631554 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:38350.service - OpenSSH per-connection server daemon (10.0.0.1:38350). Jan 16 21:24:06.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.59:22-10.0.0.1:38350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:06.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.59:22-10.0.0.1:38346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:06.648352 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:38346.service: Deactivated successfully. Jan 16 21:24:06.669494 systemd[1]: session-27.scope: Deactivated successfully. Jan 16 21:24:06.681903 systemd-logind[1575]: Session 27 logged out. Waiting for processes to exit. Jan 16 21:24:06.685955 systemd-logind[1575]: Removed session 27. 
Jan 16 21:24:06.966000 audit[5718]: USER_ACCT pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:06.972392 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 38350 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:24:06.978000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:06.978000 audit[5718]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2c5c9370 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:06.978000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:06.983449 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:07.021851 systemd-logind[1575]: New session 28 of user core. Jan 16 21:24:07.057375 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 16 21:24:07.114000 audit[5718]: USER_START pid=5718 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:07.140000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:07.487296 kubelet[2829]: E0116 21:24:07.476328 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:24:07.813536 sshd[5725]: Connection closed by 10.0.0.1 port 38350 Jan 16 21:24:07.813565 sshd-session[5718]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:07.814000 audit[5718]: USER_END pid=5718 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:07.814000 audit[5718]: CRED_DISP pid=5718 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 16 21:24:07.855303 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:38350.service: Deactivated successfully. Jan 16 21:24:07.857669 systemd-logind[1575]: Session 28 logged out. Waiting for processes to exit. Jan 16 21:24:07.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.59:22-10.0.0.1:38350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:07.868981 systemd[1]: session-28.scope: Deactivated successfully. Jan 16 21:24:07.885718 systemd-logind[1575]: Removed session 28. Jan 16 21:24:09.450651 kubelet[2829]: E0116 21:24:09.450299 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:24:11.451043 kubelet[2829]: E0116 21:24:11.450321 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:24:12.446543 kubelet[2829]: E0116 21:24:12.446403 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:24:12.876580 systemd[1]: Started sshd@27-10.0.0.59:22-10.0.0.1:46744.service - OpenSSH per-connection server daemon (10.0.0.1:46744). Jan 16 21:24:12.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.59:22-10.0.0.1:46744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:12.962467 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 16 21:24:12.962921 kernel: audit: type=1130 audit(1768598652.876:922): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.59:22-10.0.0.1:46744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:13.341000 audit[5740]: USER_ACCT pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:13.349003 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 46744 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:24:13.373016 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:13.419293 systemd-logind[1575]: New session 29 of user core. Jan 16 21:24:13.603051 kernel: audit: type=1101 audit(1768598653.341:923): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:13.618249 kernel: audit: type=1103 audit(1768598653.364:924): pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:13.618309 kernel: audit: type=1006 audit(1768598653.364:925): pid=5740 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 16 21:24:13.618344 kernel: audit: type=1300 audit(1768598653.364:925): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5d87e0c0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:13.364000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:13.364000 audit[5740]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5d87e0c0 a2=3 a3=0 items=0 ppid=1 pid=5740 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:13.608325 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 16 21:24:13.364000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:13.860857 kernel: audit: type=1327 audit(1768598653.364:925): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:13.861006 kernel: audit: type=1105 audit(1768598653.694:926): pid=5740 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:13.694000 audit[5740]: USER_START pid=5740 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:13.889740 kernel: audit: type=1103 audit(1768598653.733:927): pid=5755 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:13.733000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:14.443468 kubelet[2829]: E0116 21:24:14.443273 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:24:14.626294 sshd[5755]: Connection closed by 10.0.0.1 port 46744 Jan 16 21:24:14.625601 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:14.645000 audit[5740]: USER_END pid=5740 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:14.670274 systemd[1]: sshd@27-10.0.0.59:22-10.0.0.1:46744.service: Deactivated successfully. Jan 16 21:24:14.673297 systemd-logind[1575]: Session 29 logged out. Waiting for processes to exit. Jan 16 21:24:14.677899 systemd[1]: session-29.scope: Deactivated successfully. Jan 16 21:24:14.684456 systemd-logind[1575]: Removed session 29. 
Jan 16 21:24:14.766916 kernel: audit: type=1106 audit(1768598654.645:928): pid=5740 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:14.767067 kernel: audit: type=1104 audit(1768598654.646:929): pid=5740 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:14.646000 audit[5740]: CRED_DISP pid=5740 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:14.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.59:22-10.0.0.1:46744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:18.452249 kubelet[2829]: E0116 21:24:18.451922 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:24:19.654654 systemd[1]: Started sshd@28-10.0.0.59:22-10.0.0.1:46790.service - OpenSSH per-connection server daemon (10.0.0.1:46790). 
Jan 16 21:24:19.667349 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:24:19.667455 kernel: audit: type=1130 audit(1768598659.653:931): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.59:22-10.0.0.1:46790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:19.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.59:22-10.0.0.1:46790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:19.873448 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 46790 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:24:19.871000 audit[5784]: USER_ACCT pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:19.884923 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:19.917425 systemd-logind[1575]: New session 30 of user core. 
Jan 16 21:24:19.878000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:19.985037 kernel: audit: type=1101 audit(1768598659.871:932): pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:19.985323 kernel: audit: type=1103 audit(1768598659.878:933): pid=5784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:19.878000 audit[5784]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc25059df0 a2=3 a3=0 items=0 ppid=1 pid=5784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:20.103647 kernel: audit: type=1006 audit(1768598659.878:934): pid=5784 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 16 21:24:20.103766 kernel: audit: type=1300 audit(1768598659.878:934): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc25059df0 a2=3 a3=0 items=0 ppid=1 pid=5784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:20.103914 kernel: audit: type=1327 audit(1768598659.878:934): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:19.878000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:20.102429 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 16 21:24:20.125000 audit[5784]: USER_START pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:20.200258 kernel: audit: type=1105 audit(1768598660.125:935): pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:20.139000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:20.264360 kernel: audit: type=1103 audit(1768598660.139:936): pid=5788 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:20.484337 sshd[5788]: Connection closed by 10.0.0.1 port 46790 Jan 16 21:24:20.484412 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:20.488000 audit[5784]: USER_END pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 16 21:24:20.494992 systemd[1]: sshd@28-10.0.0.59:22-10.0.0.1:46790.service: Deactivated successfully. Jan 16 21:24:20.504950 systemd[1]: session-30.scope: Deactivated successfully. Jan 16 21:24:20.511058 systemd-logind[1575]: Session 30 logged out. Waiting for processes to exit. Jan 16 21:24:20.512641 systemd-logind[1575]: Removed session 30. Jan 16 21:24:20.537393 kernel: audit: type=1106 audit(1768598660.488:937): pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:20.488000 audit[5784]: CRED_DISP pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:20.626493 kernel: audit: type=1104 audit(1768598660.488:938): pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:20.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.59:22-10.0.0.1:46790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:22.458418 kubelet[2829]: E0116 21:24:22.457459 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:24:22.458418 kubelet[2829]: E0116 21:24:22.457558 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:24:23.453513 kubelet[2829]: E0116 21:24:23.453404 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:24:24.574000 audit[5801]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:24.574000 audit[5801]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff81e6be50 a2=0 a3=7fff81e6be3c items=0 ppid=2988 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:24.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:24.606000 audit[5801]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5801 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 21:24:24.606000 audit[5801]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff81e6be50 a2=0 a3=7fff81e6be3c items=0 ppid=2988 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:24.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 21:24:25.534685 kubelet[2829]: E0116 21:24:25.534543 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6" Jan 16 21:24:25.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.59:22-10.0.0.1:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:25.568419 systemd[1]: Started sshd@29-10.0.0.59:22-10.0.0.1:57480.service - OpenSSH per-connection server daemon (10.0.0.1:57480). Jan 16 21:24:25.587310 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 16 21:24:25.587405 kernel: audit: type=1130 audit(1768598665.567:942): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.59:22-10.0.0.1:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:25.967000 audit[5803]: USER_ACCT pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:25.977666 sshd[5803]: Accepted publickey for core from 10.0.0.1 port 57480 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:24:26.000419 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:26.081048 kernel: audit: type=1101 audit(1768598665.967:943): pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:26.083384 kernel: audit: type=1103 audit(1768598665.989:944): pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:25.989000 audit[5803]: CRED_ACQ pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:26.073059 systemd-logind[1575]: New session 31 of user core. 
Jan 16 21:24:26.119958 kernel: audit: type=1006 audit(1768598665.990:945): pid=5803 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 16 21:24:25.990000 audit[5803]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedc85efa0 a2=3 a3=0 items=0 ppid=1 pid=5803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:26.180702 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 16 21:24:26.350486 kernel: audit: type=1300 audit(1768598665.990:945): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedc85efa0 a2=3 a3=0 items=0 ppid=1 pid=5803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:26.350615 kernel: audit: type=1327 audit(1768598665.990:945): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:25.990000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:26.223000 audit[5803]: USER_START pid=5803 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:26.448439 kernel: audit: type=1105 audit(1768598666.223:946): pid=5803 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:26.455374 
kernel: audit: type=1103 audit(1768598666.248:947): pid=5809 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:26.248000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:27.084723 sshd[5809]: Connection closed by 10.0.0.1 port 57480 Jan 16 21:24:27.088770 sshd-session[5803]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:27.097000 audit[5803]: USER_END pid=5803 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:27.106211 systemd[1]: sshd@29-10.0.0.59:22-10.0.0.1:57480.service: Deactivated successfully. Jan 16 21:24:27.113754 systemd[1]: session-31.scope: Deactivated successfully. Jan 16 21:24:27.126259 systemd-logind[1575]: Session 31 logged out. Waiting for processes to exit. Jan 16 21:24:27.128804 systemd-logind[1575]: Removed session 31. 
Jan 16 21:24:27.162694 kernel: audit: type=1106 audit(1768598667.097:948): pid=5803 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:27.162920 kernel: audit: type=1104 audit(1768598667.098:949): pid=5803 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:27.098000 audit[5803]: CRED_DISP pid=5803 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:27.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.59:22-10.0.0.1:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:28.467810 kubelet[2829]: E0116 21:24:28.467471 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-6gdmk" podUID="484b15e8-2e9e-4270-8a9c-899b52ca1f08" Jan 16 21:24:30.446243 kubelet[2829]: E0116 21:24:30.445725 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f68b6d698-x2ltk" podUID="cf888ed5-265d-4b90-8b8f-76579a07e031" Jan 16 21:24:32.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.59:22-10.0.0.1:57510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:32.106815 systemd[1]: Started sshd@30-10.0.0.59:22-10.0.0.1:57510.service - OpenSSH per-connection server daemon (10.0.0.1:57510). Jan 16 21:24:32.123378 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:24:32.123516 kernel: audit: type=1130 audit(1768598672.106:951): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.59:22-10.0.0.1:57510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:32.519063 sshd[5823]: Accepted publickey for core from 10.0.0.1 port 57510 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:24:32.517000 audit[5823]: USER_ACCT pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:32.535341 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:32.560318 systemd-logind[1575]: New session 32 of user core. Jan 16 21:24:32.587301 kernel: audit: type=1101 audit(1768598672.517:952): pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:32.528000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:32.639357 kernel: audit: type=1103 audit(1768598672.528:953): pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:32.639565 kernel: audit: type=1006 audit(1768598672.528:954): pid=5823 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 16 21:24:32.528000 audit[5823]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9d77c130 a2=3 a3=0 items=0 ppid=1 pid=5823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:32.694304 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 16 21:24:32.801929 kernel: audit: type=1300 audit(1768598672.528:954): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9d77c130 a2=3 a3=0 items=0 ppid=1 pid=5823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:32.528000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:32.705000 audit[5823]: USER_START pid=5823 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:32.850392 kernel: audit: type=1327 audit(1768598672.528:954): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:32.850520 kernel: audit: type=1105 audit(1768598672.705:955): pid=5823 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:32.850575 kernel: audit: type=1103 audit(1768598672.717:956): pid=5828 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:32.717000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:33.204552 sshd[5828]: Connection closed by 10.0.0.1 port 57510 Jan 16 21:24:33.206417 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:33.212000 audit[5823]: USER_END pid=5823 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:33.218593 systemd-logind[1575]: Session 32 logged out. Waiting for processes to exit. Jan 16 21:24:33.264292 kernel: audit: type=1106 audit(1768598673.212:957): pid=5823 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:33.264351 kernel: audit: type=1104 audit(1768598673.212:958): pid=5823 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:33.212000 audit[5823]: CRED_DISP pid=5823 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:33.224488 systemd[1]: sshd@30-10.0.0.59:22-10.0.0.1:57510.service: Deactivated successfully. Jan 16 21:24:33.244223 systemd[1]: session-32.scope: Deactivated successfully. Jan 16 21:24:33.257066 systemd-logind[1575]: Removed session 32. 
Jan 16 21:24:33.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.59:22-10.0.0.1:57510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:35.444483 kubelet[2829]: E0116 21:24:35.444382 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66dd98b47c-2sbfh" podUID="fe95499a-0c2a-421c-aaa9-9ead2566d247" Jan 16 21:24:36.444818 kubelet[2829]: E0116 21:24:36.444724 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j7hqz" podUID="044f9539-8858-49e2-8876-e2c650ad8d77" Jan 16 21:24:37.449203 kubelet[2829]: E0116 21:24:37.447831 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65bbd7c669-5jcq4" podUID="1ffcbae4-3231-47a7-b3a3-9a78e5206e0e" Jan 16 21:24:38.239815 systemd[1]: Started sshd@31-10.0.0.59:22-10.0.0.1:41470.service - OpenSSH per-connection server daemon (10.0.0.1:41470). Jan 16 21:24:38.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.59:22-10.0.0.1:41470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 21:24:38.246739 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 21:24:38.246806 kernel: audit: type=1130 audit(1768598678.239:960): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.59:22-10.0.0.1:41470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:38.488000 audit[5841]: USER_ACCT pid=5841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.496300 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 41470 ssh2: RSA SHA256:/bkobahYfSCqQu7uYu8LD3UfAl7Bej4v2xqJfx/8URA Jan 16 21:24:38.503267 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 21:24:38.535796 kernel: audit: type=1101 audit(1768598678.488:961): pid=5841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.540436 kernel: audit: type=1103 audit(1768598678.494:962): pid=5841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.494000 audit[5841]: CRED_ACQ pid=5841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.535438 systemd-logind[1575]: New session 33 of user core. Jan 16 21:24:38.541708 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 16 21:24:38.588059 kernel: audit: type=1006 audit(1768598678.494:963): pid=5841 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 16 21:24:38.494000 audit[5841]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff28f2fbe0 a2=3 a3=0 items=0 ppid=1 pid=5841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:38.629216 kernel: audit: type=1300 audit(1768598678.494:963): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff28f2fbe0 a2=3 a3=0 items=0 ppid=1 pid=5841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 21:24:38.494000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:38.647289 kernel: audit: type=1327 audit(1768598678.494:963): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 21:24:38.647392 kernel: audit: type=1105 audit(1768598678.563:964): pid=5841 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.563000 audit[5841]: USER_START pid=5841 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.580000 audit[5845]: CRED_ACQ pid=5845 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.718015 kernel: audit: type=1103 audit(1768598678.580:965): pid=5845 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.815819 sshd[5845]: Connection closed by 10.0.0.1 port 41470 Jan 16 21:24:38.817609 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Jan 16 21:24:38.820000 audit[5841]: USER_END pid=5841 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.832655 systemd[1]: sshd@31-10.0.0.59:22-10.0.0.1:41470.service: Deactivated successfully. Jan 16 21:24:38.837759 systemd[1]: session-33.scope: Deactivated successfully. Jan 16 21:24:38.847838 systemd-logind[1575]: Session 33 logged out. Waiting for processes to exit. 
Jan 16 21:24:38.852234 kernel: audit: type=1106 audit(1768598678.820:966): pid=5841 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.852298 kernel: audit: type=1104 audit(1768598678.821:967): pid=5841 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.821000 audit[5841]: CRED_DISP pid=5841 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 16 21:24:38.850994 systemd-logind[1575]: Removed session 33. Jan 16 21:24:38.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.59:22-10.0.0.1:41470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 21:24:40.453804 kubelet[2829]: E0116 21:24:40.452044 2829 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4hncm" podUID="8c8c0e82-b18e-4cf2-bc74-ab0296b892f6"