Jan 28 06:14:43.464404 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 28 04:05:06 -00 2026
Jan 28 06:14:43.464438 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ede6474d93f89ce5b937430958316ce45b515ef3bd53609be944197fc2bc9aa6
Jan 28 06:14:43.464452 kernel: BIOS-provided physical RAM map:
Jan 28 06:14:43.464466 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 06:14:43.464476 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 06:14:43.464486 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 06:14:43.464498 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 06:14:43.464508 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 06:14:43.464518 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 06:14:43.464528 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 06:14:43.464538 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 28 06:14:43.464551 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 06:14:43.464561 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 06:14:43.464572 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 06:14:43.464584 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 06:14:43.464595 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 06:14:43.464608 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 06:14:43.464619 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 06:14:43.464630 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 06:14:43.464641 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 06:14:43.464652 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 06:14:43.464663 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 06:14:43.464674 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 06:14:43.464685 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 06:14:43.464696 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 06:14:43.464708 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 06:14:43.464722 kernel: NX (Execute Disable) protection: active
Jan 28 06:14:43.464733 kernel: APIC: Static calls initialized
Jan 28 06:14:43.464743 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 28 06:14:43.464755 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 28 06:14:43.464766 kernel: extended physical RAM map:
Jan 28 06:14:43.464777 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 06:14:43.464788 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 06:14:43.464799 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 06:14:43.464810 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 06:14:43.464821 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 06:14:43.464832 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 06:14:43.464848 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 06:14:43.464858 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 28 06:14:43.464869 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 28 06:14:43.464884 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 28 06:14:43.464898 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 28 06:14:43.464909 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 28 06:14:43.464921 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 06:14:43.465099 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 06:14:43.465110 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 06:14:43.465122 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 06:14:43.465133 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 06:14:43.465144 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 06:14:43.465249 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 06:14:43.465265 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 06:14:43.465277 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 06:14:43.465288 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 06:14:43.465299 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 06:14:43.465311 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 06:14:43.465322 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 06:14:43.465333 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 06:14:43.465344 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 06:14:43.465355 kernel: efi: EFI v2.7 by EDK II
Jan 28 06:14:43.465366 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 28 06:14:43.465377 kernel: random: crng init done
Jan 28 06:14:43.465391 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 28 06:14:43.465402 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 28 06:14:43.465413 kernel: secureboot: Secure boot disabled
Jan 28 06:14:43.465424 kernel: SMBIOS 2.8 present.
Jan 28 06:14:43.465435 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 28 06:14:43.465446 kernel: DMI: Memory slots populated: 1/1
Jan 28 06:14:43.465457 kernel: Hypervisor detected: KVM
Jan 28 06:14:43.465468 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 06:14:43.465479 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 06:14:43.465490 kernel: kvm-clock: using sched offset of 27418523765 cycles
Jan 28 06:14:43.465502 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 06:14:43.465517 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 06:14:43.465529 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 06:14:43.465541 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 06:14:43.465552 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 06:14:43.465564 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 28 06:14:43.465576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 06:14:43.465588 kernel: Using GB pages for direct mapping
Jan 28 06:14:43.465602 kernel: ACPI: Early table checksum verification disabled
Jan 28 06:14:43.465614 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 28 06:14:43.465625 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 28 06:14:43.465637 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 06:14:43.465649 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 06:14:43.465661 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 28 06:14:43.465673 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 06:14:43.465687 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 06:14:43.465699 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 06:14:43.465711 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 06:14:43.465723 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 28 06:14:43.465735 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 28 06:14:43.465747 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 28 06:14:43.465759 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 28 06:14:43.465774 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 28 06:14:43.465786 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 28 06:14:43.465797 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 28 06:14:43.465809 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 28 06:14:43.465820 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 28 06:14:43.465831 kernel: No NUMA configuration found
Jan 28 06:14:43.465843 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 28 06:14:43.465856 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 28 06:14:43.465870 kernel: Zone ranges:
Jan 28 06:14:43.465882 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 06:14:43.465894 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 28 06:14:43.465905 kernel: Normal empty
Jan 28 06:14:43.465918 kernel: Device empty
Jan 28 06:14:43.466092 kernel: Movable zone start for each node
Jan 28 06:14:43.466105 kernel: Early memory node ranges
Jan 28 06:14:43.466114 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 28 06:14:43.466129 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 28 06:14:43.466140 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 28 06:14:43.466236 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 28 06:14:43.466252 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 28 06:14:43.466264 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 28 06:14:43.466276 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 28 06:14:43.466288 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 28 06:14:43.466303 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 28 06:14:43.466315 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 06:14:43.466337 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 28 06:14:43.466353 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 28 06:14:43.466365 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 06:14:43.466378 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 28 06:14:43.466390 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 28 06:14:43.466402 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 28 06:14:43.466414 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 28 06:14:43.466427 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 28 06:14:43.466442 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 06:14:43.466454 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 06:14:43.466467 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 06:14:43.466481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 06:14:43.466497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 06:14:43.466507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 06:14:43.466518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 06:14:43.466529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 06:14:43.466540 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 06:14:43.466552 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 06:14:43.466564 kernel: TSC deadline timer available
Jan 28 06:14:43.466581 kernel: CPU topo: Max. logical packages: 1
Jan 28 06:14:43.466593 kernel: CPU topo: Max. logical dies: 1
Jan 28 06:14:43.466606 kernel: CPU topo: Max. dies per package: 1
Jan 28 06:14:43.466618 kernel: CPU topo: Max. threads per core: 1
Jan 28 06:14:43.466631 kernel: CPU topo: Num. cores per package: 4
Jan 28 06:14:43.466644 kernel: CPU topo: Num. threads per package: 4
Jan 28 06:14:43.466656 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 28 06:14:43.466672 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 06:14:43.466684 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 06:14:43.466696 kernel: kvm-guest: setup PV sched yield
Jan 28 06:14:43.466709 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 28 06:14:43.466722 kernel: Booting paravirtualized kernel on KVM
Jan 28 06:14:43.466735 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 06:14:43.466748 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 06:14:43.466765 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 28 06:14:43.466777 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 28 06:14:43.466789 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 06:14:43.466802 kernel: kvm-guest: PV spinlocks enabled
Jan 28 06:14:43.466815 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 06:14:43.466832 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ede6474d93f89ce5b937430958316ce45b515ef3bd53609be944197fc2bc9aa6
Jan 28 06:14:43.466847 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 06:14:43.466862 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 06:14:43.466873 kernel: Fallback order for Node 0: 0
Jan 28 06:14:43.466883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 28 06:14:43.466894 kernel: Policy zone: DMA32
Jan 28 06:14:43.466907 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 06:14:43.466920 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 06:14:43.467091 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 28 06:14:43.467107 kernel: ftrace: allocated 157 pages with 5 groups
Jan 28 06:14:43.467118 kernel: Dynamic Preempt: voluntary
Jan 28 06:14:43.467129 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 06:14:43.467141 kernel: rcu: RCU event tracing is enabled.
Jan 28 06:14:43.467243 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 06:14:43.467257 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 06:14:43.467268 kernel: Rude variant of Tasks RCU enabled.
Jan 28 06:14:43.467284 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 06:14:43.467295 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 06:14:43.467307 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 06:14:43.467318 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 06:14:43.467329 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 06:14:43.467341 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 06:14:43.467354 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 06:14:43.467365 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 06:14:43.467379 kernel: Console: colour dummy device 80x25
Jan 28 06:14:43.467390 kernel: printk: legacy console [ttyS0] enabled
Jan 28 06:14:43.467400 kernel: ACPI: Core revision 20240827
Jan 28 06:14:43.467411 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 06:14:43.467423 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 06:14:43.467434 kernel: x2apic enabled
Jan 28 06:14:43.467445 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 06:14:43.467460 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 06:14:43.467472 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 06:14:43.467483 kernel: kvm-guest: setup PV IPIs
Jan 28 06:14:43.467495 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 06:14:43.467506 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 06:14:43.467518 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 06:14:43.467529 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 06:14:43.467543 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 06:14:43.467555 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 06:14:43.467567 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 06:14:43.467578 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 06:14:43.467589 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 06:14:43.467600 kernel: Speculative Store Bypass: Vulnerable
Jan 28 06:14:43.467612 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 06:14:43.467627 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 06:14:43.467638 kernel: active return thunk: srso_alias_return_thunk
Jan 28 06:14:43.467650 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 06:14:43.467662 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 06:14:43.467673 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 06:14:43.467684 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 06:14:43.467697 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 06:14:43.467712 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 06:14:43.467723 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 06:14:43.467733 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 06:14:43.467744 kernel: Freeing SMP alternatives memory: 32K
Jan 28 06:14:43.467755 kernel: pid_max: default: 32768 minimum: 301
Jan 28 06:14:43.467766 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 06:14:43.467777 kernel: landlock: Up and running.
Jan 28 06:14:43.467792 kernel: SELinux: Initializing.
Jan 28 06:14:43.467804 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 06:14:43.467815 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 06:14:43.467827 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 06:14:43.467838 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 06:14:43.467849 kernel: signal: max sigframe size: 1776
Jan 28 06:14:43.467861 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 06:14:43.467875 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 06:14:43.467887 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 06:14:43.467898 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 06:14:43.467910 kernel: smp: Bringing up secondary CPUs ...
Jan 28 06:14:43.467921 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 06:14:43.468076 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 06:14:43.468088 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 06:14:43.468103 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 06:14:43.468115 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120812K reserved, 0K cma-reserved)
Jan 28 06:14:43.468126 kernel: devtmpfs: initialized
Jan 28 06:14:43.468138 kernel: x86/mm: Memory block size: 128MB
Jan 28 06:14:43.468150 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 28 06:14:43.468248 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 28 06:14:43.468260 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 28 06:14:43.468276 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 28 06:14:43.468287 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 28 06:14:43.468299 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 28 06:14:43.468310 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 06:14:43.468322 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 06:14:43.468333 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 06:14:43.468345 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 06:14:43.468359 kernel: audit: initializing netlink subsys (disabled)
Jan 28 06:14:43.468370 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 06:14:43.468381 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 06:14:43.468393 kernel: audit: type=2000 audit(1769580866.810:1): state=initialized audit_enabled=0 res=1
Jan 28 06:14:43.468404 kernel: cpuidle: using governor menu
Jan 28 06:14:43.468415 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 06:14:43.468427 kernel: dca service started, version 1.12.1
Jan 28 06:14:43.468441 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 28 06:14:43.468452 kernel: PCI: Using configuration type 1 for base access
Jan 28 06:14:43.468463 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 06:14:43.468475 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 06:14:43.468486 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 06:14:43.468498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 06:14:43.468509 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 06:14:43.468523 kernel: ACPI: Added _OSI(Module Device)
Jan 28 06:14:43.468534 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 06:14:43.468546 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 06:14:43.468557 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 06:14:43.468568 kernel: ACPI: Interpreter enabled
Jan 28 06:14:43.468579 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 06:14:43.468591 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 06:14:43.468605 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 06:14:43.468616 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 06:14:43.468627 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 06:14:43.468639 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 06:14:43.469270 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 06:14:43.469508 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 06:14:43.469735 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 06:14:43.469750 kernel: PCI host bridge to bus 0000:00
Jan 28 06:14:43.470135 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 06:14:43.470446 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 06:14:43.470657 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 06:14:43.471885 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 28 06:14:43.472406 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 28 06:14:43.472634 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 28 06:14:43.472859 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 06:14:43.473451 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 28 06:14:43.473706 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 28 06:14:43.474322 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 28 06:14:43.474558 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 28 06:14:43.474786 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 28 06:14:43.475870 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 06:14:43.477602 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 28320 usecs
Jan 28 06:14:43.480381 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 28 06:14:43.481518 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 28 06:14:43.482243 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 28 06:14:43.482499 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 28 06:14:43.482756 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 28 06:14:43.483265 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 28 06:14:43.483621 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 28 06:14:43.483874 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 28 06:14:43.484382 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 28 06:14:43.484633 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 28 06:14:43.484879 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 28 06:14:43.485378 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 28 06:14:43.485632 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 28 06:14:43.485885 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 28 06:14:43.486441 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 06:14:43.486684 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 27343 usecs
Jan 28 06:14:43.487108 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 28 06:14:43.487453 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 28 06:14:43.487703 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 28 06:14:43.488277 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 28 06:14:43.488523 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 28 06:14:43.488543 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 06:14:43.488555 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 06:14:43.488567 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 06:14:43.488584 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 06:14:43.488597 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 06:14:43.488610 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 06:14:43.488624 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 06:14:43.488635 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 06:14:43.488646 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 06:14:43.488656 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 06:14:43.488672 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 06:14:43.488686 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 06:14:43.488698 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 06:14:43.488711 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 06:14:43.488722 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 06:14:43.488732 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 06:14:43.488743 kernel: iommu: Default domain type: Translated
Jan 28 06:14:43.488761 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 06:14:43.488774 kernel: efivars: Registered efivars operations
Jan 28 06:14:43.488788 kernel: PCI: Using ACPI for IRQ routing
Jan 28 06:14:43.488799 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 06:14:43.488810 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 28 06:14:43.488820 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 28 06:14:43.488831 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 28 06:14:43.488848 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 28 06:14:43.488861 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 28 06:14:43.488873 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 28 06:14:43.488884 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 28 06:14:43.488894 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 28 06:14:43.489461 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 06:14:43.489704 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 06:14:43.490129 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 06:14:43.490148 kernel: vgaarb: loaded
Jan 28 06:14:43.490262 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 06:14:43.490276 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 06:14:43.490290 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 06:14:43.490301 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 06:14:43.490312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 06:14:43.490328 kernel: pnp: PnP ACPI init
Jan 28 06:14:43.490593 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 28 06:14:43.490611 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 06:14:43.490624 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 06:14:43.490637 kernel: NET: Registered PF_INET protocol family
Jan 28 06:14:43.490650 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 06:14:43.490669 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 06:14:43.490698 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 06:14:43.490714 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 06:14:43.490727 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 06:14:43.490741 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 06:14:43.490754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 06:14:43.490765 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 06:14:43.490780 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 06:14:43.490791 kernel: NET: Registered PF_XDP protocol family
Jan 28 06:14:43.491290 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 28 06:14:43.491536 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 28 06:14:43.491760 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 06:14:43.492248 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 06:14:43.492480 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 06:14:43.492712 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 28 06:14:43.493093 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 28 06:14:43.493409 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 28 06:14:43.493430 kernel: PCI: CLS 0 bytes, default 64
Jan 28 06:14:43.493446 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 06:14:43.493458 kernel: Initialise system trusted keyrings
Jan 28 06:14:43.493474 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 06:14:43.493486 kernel: Key type asymmetric registered
Jan 28 06:14:43.493498 kernel: Asymmetric key parser 'x509' registered
Jan 28 06:14:43.493512 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 06:14:43.493525 kernel: io scheduler mq-deadline registered
Jan 28 06:14:43.493538 kernel: io scheduler kyber registered
Jan 28 06:14:43.493549 kernel: io scheduler bfq registered
Jan 28 06:14:43.493560 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 06:14:43.493577 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 06:14:43.493594 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 06:14:43.493607 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 06:14:43.493621 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 06:14:43.493636 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 06:14:43.493647 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 06:14:43.493658 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 06:14:43.493671 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 06:14:43.494103 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 06:14:43.494128 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 06:14:43.494466 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 06:14:43.494714 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T06:14:39 UTC (1769580879)
Jan 28 06:14:43.495096 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 28 06:14:43.495115 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 06:14:43.495126 kernel: efifb: probing for efifb
Jan 28 06:14:43.495138 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 28 06:14:43.495230 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 28 06:14:43.495251 kernel: efifb: scrolling: redraw
Jan 28 06:14:43.495263 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 28 06:14:43.495278 kernel: Console: switching to colour frame buffer device 160x50
Jan 28 06:14:43.495289 kernel: fb0: EFI VGA frame buffer device
Jan 28 06:14:43.495303 kernel: pstore: Using crash dump compression: deflate
Jan 28 06:14:43.495315 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 28 06:14:43.495326 kernel: NET: Registered PF_INET6 protocol family
Jan 28 06:14:43.495337 kernel: Segment Routing with IPv6
Jan 28 06:14:43.495353 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 06:14:43.495367 kernel: NET: Registered PF_PACKET protocol family
Jan 28 06:14:43.495378 kernel: Key type dns_resolver
registered Jan 28 06:14:43.495389 kernel: IPI shorthand broadcast: enabled Jan 28 06:14:43.495400 kernel: sched_clock: Marking stable (6423064298, 5529849761)->(14182103773, -2229189714) Jan 28 06:14:43.495413 kernel: registered taskstats version 1 Jan 28 06:14:43.495425 kernel: Loading compiled-in X.509 certificates Jan 28 06:14:43.495441 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 60cf4c6c8cc6ec3eb800b1f9cf1d8cc38776b17f' Jan 28 06:14:43.495451 kernel: Demotion targets for Node 0: null Jan 28 06:14:43.495463 kernel: Key type .fscrypt registered Jan 28 06:14:43.495541 kernel: Key type fscrypt-provisioning registered Jan 28 06:14:43.495554 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 06:14:43.495565 kernel: ima: Allocated hash algorithm: sha1 Jan 28 06:14:43.495576 kernel: ima: No architecture policies found Jan 28 06:14:43.495592 kernel: clk: Disabling unused clocks Jan 28 06:14:43.495605 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 28 06:14:43.495617 kernel: Write protecting the kernel read-only data: 47104k Jan 28 06:14:43.495627 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 28 06:14:43.495638 kernel: Run /init as init process Jan 28 06:14:43.495651 kernel: with arguments: Jan 28 06:14:43.495666 kernel: /init Jan 28 06:14:43.495681 kernel: with environment: Jan 28 06:14:43.495691 kernel: HOME=/ Jan 28 06:14:43.495702 kernel: TERM=linux Jan 28 06:14:43.495715 kernel: SCSI subsystem initialized Jan 28 06:14:43.495728 kernel: libata version 3.00 loaded. 
Jan 28 06:14:43.496576 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 06:14:43.496596 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 06:14:43.496912 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 28 06:14:43.497413 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 28 06:14:43.497662 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 06:14:43.498105 kernel: scsi host0: ahci Jan 28 06:14:43.498467 kernel: scsi host1: ahci Jan 28 06:14:43.498736 kernel: scsi host2: ahci Jan 28 06:14:43.499263 kernel: scsi host3: ahci Jan 28 06:14:43.499531 kernel: scsi host4: ahci Jan 28 06:14:43.499793 kernel: scsi host5: ahci Jan 28 06:14:43.499813 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 28 06:14:43.499825 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 28 06:14:43.499837 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 28 06:14:43.499857 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 28 06:14:43.499871 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 28 06:14:43.499886 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 28 06:14:43.499897 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 06:14:43.499909 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 06:14:43.499920 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 06:14:43.500098 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 28 06:14:43.500117 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 06:14:43.500131 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 06:14:43.500145 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 28 06:14:43.500240 kernel: ata3.00: applying bridge limits Jan 28 06:14:43.500253 kernel: ata5: 
SATA link down (SStatus 0 SControl 300) Jan 28 06:14:43.500264 kernel: ata3.00: LPM support broken, forcing max_power Jan 28 06:14:43.500279 kernel: ata3.00: configured for UDMA/100 Jan 28 06:14:43.500572 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 06:14:43.500836 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 28 06:14:43.501331 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 28 06:14:43.501354 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 06:14:43.501622 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 28 06:14:43.501648 kernel: GPT:16515071 != 27000831 Jan 28 06:14:43.501663 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 06:14:43.501675 kernel: GPT:16515071 != 27000831 Jan 28 06:14:43.501685 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 06:14:43.501696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 06:14:43.501707 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 06:14:43.502131 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 28 06:14:43.502257 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 06:14:43.502273 kernel: device-mapper: uevent: version 1.0.3 Jan 28 06:14:43.502288 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 06:14:43.502300 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 28 06:14:43.502311 kernel: raid6: avx2x4 gen() 18811 MB/s Jan 28 06:14:43.502322 kernel: raid6: avx2x2 gen() 30232 MB/s Jan 28 06:14:43.502333 kernel: raid6: avx2x1 gen() 24310 MB/s Jan 28 06:14:43.502351 kernel: raid6: using algorithm avx2x2 gen() 30232 MB/s Jan 28 06:14:43.502364 kernel: raid6: .... 
xor() 25774 MB/s, rmw enabled Jan 28 06:14:43.502378 kernel: raid6: using avx2x2 recovery algorithm Jan 28 06:14:43.502390 kernel: xor: automatically using best checksumming function avx Jan 28 06:14:43.502402 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 06:14:43.502413 kernel: BTRFS: device fsid d4cc183a-4a92-40c5-bcbb-0af9ab626d3e devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (180) Jan 28 06:14:43.502425 kernel: BTRFS info (device dm-0): first mount of filesystem d4cc183a-4a92-40c5-bcbb-0af9ab626d3e Jan 28 06:14:43.502443 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:14:43.502458 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 06:14:43.502469 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 06:14:43.502480 kernel: loop: module loaded Jan 28 06:14:43.502491 kernel: loop0: detected capacity change from 0 to 100552 Jan 28 06:14:43.502502 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 06:14:43.502519 systemd[1]: Successfully made /usr/ read-only. Jan 28 06:14:43.502538 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 06:14:43.502550 systemd[1]: Detected virtualization kvm. Jan 28 06:14:43.502561 systemd[1]: Detected architecture x86-64. Jan 28 06:14:43.502575 systemd[1]: Running in initrd. Jan 28 06:14:43.502589 systemd[1]: No hostname configured, using default hostname. Jan 28 06:14:43.502601 systemd[1]: Hostname set to . Jan 28 06:14:43.502616 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 06:14:43.502629 systemd[1]: Queued start job for default target initrd.target. 
Jan 28 06:14:43.502644 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 06:14:43.502657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 06:14:43.502672 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:14:43.502685 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 06:14:43.502697 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 06:14:43.502714 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 06:14:43.502729 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 06:14:43.502744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 06:14:43.502757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:14:43.502769 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 06:14:43.502784 systemd[1]: Reached target paths.target - Path Units. Jan 28 06:14:43.502796 systemd[1]: Reached target slices.target - Slice Units. Jan 28 06:14:43.502811 systemd[1]: Reached target swap.target - Swaps. Jan 28 06:14:43.502826 systemd[1]: Reached target timers.target - Timer Units. Jan 28 06:14:43.502839 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 06:14:43.502851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 06:14:43.502864 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 06:14:43.502881 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 06:14:43.502895 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 28 06:14:43.502909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 06:14:43.503065 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 06:14:43.503085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 06:14:43.503101 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 06:14:43.503113 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 06:14:43.503129 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 06:14:43.503145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 06:14:43.503248 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 06:14:43.503263 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 06:14:43.503278 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 06:14:43.503290 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 06:14:43.503302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 06:14:43.503319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:14:43.503332 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 06:14:43.503347 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 06:14:43.503366 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 06:14:43.503379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 06:14:43.503549 systemd-journald[320]: Collecting audit messages is enabled. 
Jan 28 06:14:43.503593 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 06:14:43.503608 systemd-journald[320]: Journal started Jan 28 06:14:43.503630 systemd-journald[320]: Runtime Journal (/run/log/journal/7f6bce0e2b98423094a452cf49db8776) is 6M, max 48M, 42M free. Jan 28 06:14:43.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.535241 kernel: audit: type=1130 audit(1769580883.514:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.535293 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 06:14:43.575494 kernel: audit: type=1130 audit(1769580883.543:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.553458 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 06:14:43.589355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 06:14:43.631091 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 06:14:43.633291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 06:14:43.669124 kernel: audit: type=1130 audit(1769580883.637:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.674235 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 06:14:43.699351 kernel: Bridge firewalling registered Jan 28 06:14:43.679547 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 28 06:14:43.736230 kernel: audit: type=1130 audit(1769580883.709:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.772813 kernel: audit: type=1130 audit(1769580883.736:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.688089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 06:14:43.730722 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 28 06:14:43.735893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 06:14:43.821489 kernel: audit: type=1130 audit(1769580883.781:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.749372 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:14:43.774243 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:14:43.828816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 06:14:43.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.883733 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 06:14:43.912333 kernel: audit: type=1130 audit(1769580883.872:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.921690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:14:43.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:43.965289 kernel: audit: type=1130 audit(1769580883.943:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 28 06:14:43.986000 audit: BPF prog-id=6 op=LOAD Jan 28 06:14:43.994086 kernel: audit: type=1334 audit(1769580883.986:10): prog-id=6 op=LOAD Jan 28 06:14:43.995312 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 06:14:44.052288 dracut-cmdline[354]: dracut-109 Jan 28 06:14:44.067679 dracut-cmdline[354]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ede6474d93f89ce5b937430958316ce45b515ef3bd53609be944197fc2bc9aa6 Jan 28 06:14:44.241801 systemd-resolved[357]: Positive Trust Anchors: Jan 28 06:14:44.241894 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 06:14:44.241900 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 06:14:44.242092 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 06:14:44.371638 systemd-resolved[357]: Defaulting to hostname 'linux'. Jan 28 06:14:44.374044 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 28 06:14:44.424332 kernel: audit: type=1130 audit(1769580884.389:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:44.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:44.391099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:14:44.543272 kernel: Loading iSCSI transport class v2.0-870. Jan 28 06:14:44.573390 kernel: iscsi: registered transport (tcp) Jan 28 06:14:44.616341 kernel: iscsi: registered transport (qla4xxx) Jan 28 06:14:44.616458 kernel: QLogic iSCSI HBA Driver Jan 28 06:14:44.797907 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 06:14:44.861880 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 06:14:44.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:44.893701 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 06:14:45.018121 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 06:14:45.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:45.030904 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 06:14:45.050856 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 28 06:14:45.141893 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 06:14:45.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:45.149000 audit: BPF prog-id=7 op=LOAD Jan 28 06:14:45.151000 audit: BPF prog-id=8 op=LOAD Jan 28 06:14:45.152275 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 06:14:45.220298 systemd-udevd[582]: Using default interface naming scheme 'v257'. Jan 28 06:14:45.248824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:14:45.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:45.268169 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 06:14:45.373587 dracut-pre-trigger[624]: rd.md=0: removing MD RAID activation Jan 28 06:14:45.539349 kernel: hrtimer: interrupt took 5959764 ns Jan 28 06:14:45.557840 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 06:14:45.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:45.594278 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 06:14:45.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:14:45.604000 audit: BPF prog-id=9 op=LOAD Jan 28 06:14:45.622655 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 06:14:45.643263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 06:14:45.817114 systemd-networkd[726]: lo: Link UP Jan 28 06:14:45.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:45.817282 systemd-networkd[726]: lo: Gained carrier Jan 28 06:14:45.822920 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 06:14:45.830617 systemd[1]: Reached target network.target - Network. Jan 28 06:14:45.904315 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:14:45.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:45.947917 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 06:14:46.268907 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 06:14:46.314123 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 06:14:46.350516 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 06:14:46.394092 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 06:14:46.401656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 06:14:46.437173 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 06:14:46.465537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 28 06:14:46.481073 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 28 06:14:46.468426 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:14:46.504422 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:14:46.524682 kernel: AES CTR mode by8 optimization enabled Jan 28 06:14:46.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:46.542587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:14:46.594868 disk-uuid[822]: Primary Header is updated. Jan 28 06:14:46.594868 disk-uuid[822]: Secondary Entries is updated. Jan 28 06:14:46.594868 disk-uuid[822]: Secondary Header is updated. Jan 28 06:14:46.676720 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:14:46.676735 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 06:14:46.678513 systemd-networkd[726]: eth0: Link UP Jan 28 06:14:46.688385 systemd-networkd[726]: eth0: Gained carrier Jan 28 06:14:46.688403 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:14:46.770855 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 06:14:46.820310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:14:46.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:14:46.943814 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 06:14:46.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:46.954562 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 06:14:46.965917 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:14:46.986751 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 06:14:47.015902 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 06:14:47.152903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 06:14:47.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:47.749703 systemd-networkd[726]: eth0: Gained IPv6LL Jan 28 06:14:47.758175 disk-uuid[830]: Warning: The kernel is still using the old partition table. Jan 28 06:14:47.758175 disk-uuid[830]: The new table will be used at the next reboot or after you Jan 28 06:14:47.758175 disk-uuid[830]: run partprobe(8) or kpartx(8) Jan 28 06:14:47.758175 disk-uuid[830]: The operation has completed successfully. Jan 28 06:14:47.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:47.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:47.802547 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 28 06:14:47.802779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 06:14:47.807798 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 06:14:47.930358 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Jan 28 06:14:47.930425 kernel: BTRFS info (device vda6): first mount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:14:47.943393 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:14:47.970481 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:14:47.970569 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:14:48.011159 kernel: BTRFS info (device vda6): last unmount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:14:48.017845 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 06:14:48.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:48.039417 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 28 06:14:49.697275 ignition[884]: Ignition 2.24.0 Jan 28 06:14:49.697289 ignition[884]: Stage: fetch-offline Jan 28 06:14:49.698049 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:14:49.698072 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 06:14:49.699201 ignition[884]: parsed url from cmdline: "" Jan 28 06:14:49.699206 ignition[884]: no config URL provided Jan 28 06:14:49.699845 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 06:14:49.699861 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 28 06:14:49.700187 ignition[884]: op(1): [started] loading QEMU firmware config module Jan 28 06:14:49.700193 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 28 06:14:49.838660 ignition[884]: op(1): [finished] loading QEMU firmware config module Jan 28 06:14:50.599525 ignition[884]: parsing config with SHA512: 1526b9eb95e9ee2af337dc40c6c246556135ce1adad76470b375349c4598b15bd05117055c55c3900680f38bd7467222d4d5a94f54f971f3a9c8a7fb91a62830 Jan 28 06:14:50.664065 unknown[884]: fetched base config from "system" Jan 28 06:14:50.664145 unknown[884]: fetched user config from "qemu" Jan 28 06:14:50.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:50.667635 ignition[884]: fetch-offline: fetch-offline passed Jan 28 06:14:50.725698 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 28 06:14:50.725732 kernel: audit: type=1130 audit(1769580890.690:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:50.672789 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 28 06:14:50.668196 ignition[884]: Ignition finished successfully Jan 28 06:14:50.690753 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 28 06:14:50.692753 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 06:14:50.912667 ignition[894]: Ignition 2.24.0 Jan 28 06:14:50.912760 ignition[894]: Stage: kargs Jan 28 06:14:50.913448 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:14:50.913460 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 06:14:50.927478 ignition[894]: kargs: kargs passed Jan 28 06:14:50.928083 ignition[894]: Ignition finished successfully Jan 28 06:14:50.949400 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 06:14:50.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:50.964433 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 06:14:50.992715 kernel: audit: type=1130 audit(1769580890.960:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:51.805421 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1118607403 wd_nsec: 1118606893 Jan 28 06:14:51.844133 ignition[901]: Ignition 2.24.0 Jan 28 06:14:51.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:51.850381 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 28 06:14:51.893775 kernel: audit: type=1130 audit(1769580891.861:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:51.844300 ignition[901]: Stage: disks Jan 28 06:14:51.862408 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 06:14:51.844673 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jan 28 06:14:51.904309 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 06:14:51.844684 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 06:14:51.923634 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 06:14:51.846772 ignition[901]: disks: disks passed Jan 28 06:14:51.923803 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 06:14:51.846821 ignition[901]: Ignition finished successfully Jan 28 06:14:51.945897 systemd[1]: Reached target basic.target - Basic System. Jan 28 06:14:51.962438 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 06:14:52.074558 systemd-fsck[911]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 28 06:14:52.092503 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 06:14:52.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:52.105135 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 06:14:52.134302 kernel: audit: type=1130 audit(1769580892.101:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:14:53.137112 kernel: EXT4-fs (vda9): mounted filesystem 07ff5302-22ec-4ed8-8e90-e96c5bc64457 r/w with ordered data mode. Quota mode: none. Jan 28 06:14:53.139339 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 06:14:53.146238 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 06:14:53.162396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 06:14:53.193471 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 06:14:53.205634 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 06:14:53.231516 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Jan 28 06:14:53.205751 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 06:14:53.277156 kernel: BTRFS info (device vda6): first mount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:14:53.277384 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:14:53.277401 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:14:53.205782 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 06:14:53.301334 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:14:53.214205 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 06:14:53.245341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 06:14:53.308472 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 06:14:53.956576 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 28 06:14:54.010894 kernel: audit: type=1130 audit(1769580893.969:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:53.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:53.982831 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 06:14:54.001909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 06:14:54.072148 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 06:14:54.098498 kernel: BTRFS info (device vda6): last unmount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:14:54.134785 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 06:14:54.172475 kernel: audit: type=1130 audit(1769580894.135:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:54.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:54.388772 ignition[1020]: INFO : Ignition 2.24.0 Jan 28 06:14:54.388772 ignition[1020]: INFO : Stage: mount Jan 28 06:14:54.405676 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:14:54.405676 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 06:14:54.421736 ignition[1020]: INFO : mount: mount passed Jan 28 06:14:54.428063 ignition[1020]: INFO : Ignition finished successfully Jan 28 06:14:54.439735 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 28 06:14:54.470088 kernel: audit: type=1130 audit(1769580894.445:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:54.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:14:54.470710 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 06:14:54.538545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 06:14:54.604256 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1032) Jan 28 06:14:54.617889 kernel: BTRFS info (device vda6): first mount of filesystem 8b0f8b2b-c413-4474-b428-022c461f0c86 Jan 28 06:14:54.618172 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 06:14:54.639764 kernel: BTRFS info (device vda6): turning on async discard Jan 28 06:14:54.641446 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 06:14:54.644511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 06:14:55.069904 ignition[1049]: INFO : Ignition 2.24.0 Jan 28 06:14:55.069904 ignition[1049]: INFO : Stage: files Jan 28 06:14:55.097117 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:14:55.097117 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 06:14:55.113166 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping Jan 28 06:14:55.122847 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 06:14:55.122847 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 06:14:55.154917 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 06:14:55.164520 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 06:14:55.164520 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 06:14:55.164520 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 06:14:55.164520 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 06:14:55.157877 unknown[1049]: wrote ssh authorized keys file for user: core Jan 28 06:14:55.296137 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 06:14:56.273885 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 06:14:56.273885 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 06:14:56.316247 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 06:14:57.130404 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 06:15:04.088886 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 06:15:04.088886 ignition[1049]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 28 06:15:04.127650 ignition[1049]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 06:15:04.263730 ignition[1049]: INFO : files: files passed Jan 28 06:15:04.263730 ignition[1049]: INFO : Ignition finished successfully Jan 28 06:15:04.417453 kernel: audit: type=1130 audit(1769580904.279:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.263808 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 06:15:04.289521 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 06:15:04.414299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 06:15:04.455451 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 06:15:04.478174 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 06:15:04.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:15:04.513824 initrd-setup-root-after-ignition[1080]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 06:15:04.540633 kernel: audit: type=1130 audit(1769580904.499:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.540672 kernel: audit: type=1131 audit(1769580904.499:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.531816 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 06:15:04.588759 kernel: audit: type=1130 audit(1769580904.557:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.588847 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:15:04.588847 initrd-setup-root-after-ignition[1082]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:15:04.558906 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 28 06:15:04.648824 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 06:15:04.603708 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 06:15:04.754586 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 06:15:04.754911 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 06:15:04.819472 kernel: audit: type=1130 audit(1769580904.762:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.819505 kernel: audit: type=1131 audit(1769580904.762:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.764760 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 06:15:04.828729 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 06:15:04.847293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 06:15:04.849564 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 06:15:04.937612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 28 06:15:04.974179 kernel: audit: type=1130 audit(1769580904.946:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:04.949628 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 06:15:05.028556 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 06:15:05.028825 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:15:05.049445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 06:15:05.074738 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 06:15:05.075269 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 06:15:05.124724 kernel: audit: type=1131 audit(1769580905.086:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.075467 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 06:15:05.125186 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 06:15:05.136189 systemd[1]: Stopped target basic.target - Basic System. Jan 28 06:15:05.149314 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 06:15:05.161869 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 28 06:15:05.174814 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 06:15:05.191582 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 06:15:05.211220 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 06:15:05.230252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 06:15:05.249909 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 06:15:05.269268 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 06:15:05.304362 systemd[1]: Stopped target swap.target - Swaps. Jan 28 06:15:05.323335 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 06:15:05.368753 kernel: audit: type=1131 audit(1769580905.331:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.323572 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 06:15:05.368913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 06:15:05.389917 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 06:15:05.422312 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 06:15:05.432238 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 06:15:05.432665 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 06:15:05.487510 kernel: audit: type=1131 audit(1769580905.455:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 28 06:15:05.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.432811 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 06:15:05.487748 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 06:15:05.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.488312 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 06:15:05.497280 systemd[1]: Stopped target paths.target - Path Units. Jan 28 06:15:05.519871 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 06:15:05.520649 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 06:15:05.529238 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 06:15:05.555349 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 06:15:05.571301 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 06:15:05.571485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 06:15:05.583220 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 06:15:05.583344 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 06:15:05.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.597705 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Jan 28 06:15:05.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.597825 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 06:15:05.603745 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 06:15:05.603860 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 06:15:05.617308 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 06:15:05.617627 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 06:15:05.639340 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 06:15:05.649555 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 06:15:05.721133 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 06:15:05.721530 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:15:05.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.748788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 06:15:05.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.748901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 06:15:05.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:15:05.758675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 06:15:05.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.799659 ignition[1107]: INFO : Ignition 2.24.0 Jan 28 06:15:05.799659 ignition[1107]: INFO : Stage: umount Jan 28 06:15:05.799659 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 06:15:05.799659 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 06:15:05.799659 ignition[1107]: INFO : umount: umount passed Jan 28 06:15:05.799659 ignition[1107]: INFO : Ignition finished successfully Jan 28 06:15:05.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.758780 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 06:15:05.779102 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 06:15:05.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:05.779210 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 06:15:05.793857 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 28 06:15:05.794688 systemd[1]: Stopped target network.target - Network.
Jan 28 06:15:05.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.807185 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 06:15:05.807291 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 06:15:05.826638 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 06:15:05.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.827318 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 06:15:05.834794 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 06:15:05.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.834846 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 06:15:05.866225 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 06:15:06.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.010000 audit: BPF prog-id=6 op=UNLOAD
Jan 28 06:15:05.866275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 06:15:06.026000 audit: BPF prog-id=9 op=UNLOAD
Jan 28 06:15:05.873746 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 06:15:05.898525 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 06:15:06.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.907748 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 06:15:05.907913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 06:15:05.938173 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 06:15:06.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.938278 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 06:15:06.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.975559 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 06:15:06.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:05.975773 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 06:15:06.000230 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 06:15:06.000475 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 06:15:06.010184 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 28 06:15:06.026516 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 06:15:06.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.026583 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 06:15:06.047336 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 06:15:06.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.047528 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 06:15:06.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.067743 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 06:15:06.082366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 06:15:06.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.082558 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 06:15:06.098567 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 06:15:06.098621 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 06:15:06.113812 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 06:15:06.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.113881 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 06:15:06.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.129155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 06:15:06.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.178168 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 06:15:06.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.178594 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 06:15:06.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.204823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 06:15:06.204886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 06:15:06.213679 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 06:15:06.213732 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 06:15:06.231121 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 06:15:06.231194 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 06:15:06.240171 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 06:15:06.240226 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 06:15:06.264688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 06:15:06.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.264748 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 06:15:06.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:06.289224 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 06:15:06.305273 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 28 06:15:06.305341 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 06:15:06.320100 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 06:15:06.320179 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 06:15:06.335190 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 28 06:15:06.335242 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 06:15:06.351157 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 06:15:06.351210 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 06:15:06.361795 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 06:15:06.361847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 06:15:06.373476 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 06:15:06.437654 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 06:15:06.444202 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 06:15:06.444361 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 06:15:06.471336 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 06:15:06.492694 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 06:15:06.544789 systemd[1]: Switching root.
Jan 28 06:15:06.636701 systemd-journald[320]: Journal stopped
Jan 28 06:15:09.187888 systemd-journald[320]: Received SIGTERM from PID 1 (systemd).
Jan 28 06:15:09.188099 kernel: SELinux: policy capability network_peer_controls=1
Jan 28 06:15:09.188120 kernel: SELinux: policy capability open_perms=1
Jan 28 06:15:09.188220 kernel: SELinux: policy capability extended_socket_class=1
Jan 28 06:15:09.188232 kernel: SELinux: policy capability always_check_network=0
Jan 28 06:15:09.188248 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 28 06:15:09.188259 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 28 06:15:09.188274 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 28 06:15:09.188285 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 28 06:15:09.188297 kernel: SELinux: policy capability userspace_initial_context=0
Jan 28 06:15:09.188383 systemd[1]: Successfully loaded SELinux policy in 132.488ms.
Jan 28 06:15:09.188486 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.995ms.
Jan 28 06:15:09.188501 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 06:15:09.188513 systemd[1]: Detected virtualization kvm.
Jan 28 06:15:09.188524 systemd[1]: Detected architecture x86-64.
Jan 28 06:15:09.188536 systemd[1]: Detected first boot.
Jan 28 06:15:09.188551 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 28 06:15:09.188638 zram_generator::config[1153]: No configuration found.
Jan 28 06:15:09.188653 kernel: Guest personality initialized and is inactive
Jan 28 06:15:09.188665 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 28 06:15:09.188676 kernel: Initialized host personality
Jan 28 06:15:09.188688 kernel: NET: Registered PF_VSOCK protocol family
Jan 28 06:15:09.188699 systemd[1]: Populated /etc with preset unit settings.
Jan 28 06:15:09.188712 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 28 06:15:09.188800 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 28 06:15:09.188813 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 28 06:15:09.188830 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 28 06:15:09.188841 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 06:15:09.188853 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 28 06:15:09.188865 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 28 06:15:09.189070 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 28 06:15:09.189085 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 06:15:09.189099 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 28 06:15:09.189110 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 28 06:15:09.189122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 06:15:09.189135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 06:15:09.189146 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 28 06:15:09.189233 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 28 06:15:09.189252 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 28 06:15:09.189479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 06:15:09.189563 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 28 06:15:09.189646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 06:15:09.189659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 06:15:09.189670 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 28 06:15:09.189683 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 28 06:15:09.189695 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 28 06:15:09.189706 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 28 06:15:09.189787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 06:15:09.189801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 06:15:09.189812 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 28 06:15:09.189824 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 06:15:09.189837 systemd[1]: Reached target swap.target - Swaps.
Jan 28 06:15:09.189849 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 28 06:15:09.189862 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 28 06:15:09.190071 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 28 06:15:09.190087 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 06:15:09.190099 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 28 06:15:09.190111 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 06:15:09.190123 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 28 06:15:09.190135 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 28 06:15:09.190147 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 06:15:09.190159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 06:15:09.190246 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 28 06:15:09.190259 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 28 06:15:09.190272 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 28 06:15:09.190284 systemd[1]: Mounting media.mount - External Media Directory...
Jan 28 06:15:09.190296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 06:15:09.190308 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 28 06:15:09.190390 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 28 06:15:09.190474 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 28 06:15:09.190488 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 28 06:15:09.190500 systemd[1]: Reached target machines.target - Containers.
Jan 28 06:15:09.190512 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 28 06:15:09.190524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 06:15:09.190538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 06:15:09.190622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 28 06:15:09.190637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 06:15:09.190649 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 06:15:09.190661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 06:15:09.190738 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 28 06:15:09.190751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 06:15:09.190763 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 28 06:15:09.190844 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 28 06:15:09.190857 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 28 06:15:09.190869 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 28 06:15:09.190881 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 28 06:15:09.190894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 06:15:09.191100 kernel: ACPI: bus type drm_connector registered
Jan 28 06:15:09.191113 kernel: fuse: init (API version 7.41)
Jan 28 06:15:09.191125 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 06:15:09.191137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 06:15:09.191149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 06:15:09.191161 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 28 06:15:09.191269 systemd-journald[1239]: Collecting audit messages is enabled.
Jan 28 06:15:09.191291 systemd-journald[1239]: Journal started
Jan 28 06:15:09.191314 systemd-journald[1239]: Runtime Journal (/run/log/journal/7f6bce0e2b98423094a452cf49db8776) is 6M, max 48M, 42M free.
Jan 28 06:15:08.538000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 28 06:15:09.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.078000 audit: BPF prog-id=14 op=UNLOAD
Jan 28 06:15:09.078000 audit: BPF prog-id=13 op=UNLOAD
Jan 28 06:15:09.090000 audit: BPF prog-id=15 op=LOAD
Jan 28 06:15:09.092000 audit: BPF prog-id=16 op=LOAD
Jan 28 06:15:09.096000 audit: BPF prog-id=17 op=LOAD
Jan 28 06:15:09.184000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 28 06:15:09.184000 audit[1239]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc84385fc0 a2=4000 a3=0 items=0 ppid=1 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 06:15:09.184000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 28 06:15:07.982164 systemd[1]: Queued start job for default target multi-user.target.
Jan 28 06:15:08.005865 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 28 06:15:08.007194 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 28 06:15:08.007721 systemd[1]: systemd-journald.service: Consumed 2.719s CPU time.
Jan 28 06:15:09.206066 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 28 06:15:09.227141 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 06:15:09.250245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 06:15:09.260145 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 06:15:09.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.270884 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 28 06:15:09.279169 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 28 06:15:09.287501 systemd[1]: Mounted media.mount - External Media Directory.
Jan 28 06:15:09.294836 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 28 06:15:09.303277 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 28 06:15:09.312164 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 28 06:15:09.320635 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 28 06:15:09.352752 kernel: kauditd_printk_skb: 62 callbacks suppressed
Jan 28 06:15:09.352792 kernel: audit: type=1130 audit(1769580909.329:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.331223 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 06:15:09.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.362721 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 28 06:15:09.363185 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 28 06:15:09.381069 kernel: audit: type=1130 audit(1769580909.361:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.390107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 06:15:09.390386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 06:15:09.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.425732 kernel: audit: type=1130 audit(1769580909.388:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.425769 kernel: audit: type=1131 audit(1769580909.388:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.434848 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 06:15:09.435749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 06:15:09.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.470367 kernel: audit: type=1130 audit(1769580909.433:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.470486 kernel: audit: type=1131 audit(1769580909.433:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.481583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 06:15:09.482077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 06:15:09.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.526795 kernel: audit: type=1130 audit(1769580909.479:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.526855 kernel: audit: type=1131 audit(1769580909.480:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.538576 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 28 06:15:09.539131 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 28 06:15:09.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.581777 kernel: audit: type=1130 audit(1769580909.536:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.581895 kernel: audit: type=1131 audit(1769580909.536:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.592262 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 06:15:09.592698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 06:15:09.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.602204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 06:15:09.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.611633 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 06:15:09.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.622623 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 28 06:15:09.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.633349 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 28 06:15:09.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.644905 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 06:15:09.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:15:09.671387 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 06:15:09.681740 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 28 06:15:09.694594 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 28 06:15:09.710159 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 28 06:15:09.720397 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 28 06:15:09.720549 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 06:15:09.732185 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 28 06:15:09.744236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 06:15:09.744540 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 28 06:15:09.748752 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 28 06:15:09.758706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 28 06:15:09.770295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 06:15:09.774290 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 28 06:15:09.784771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 06:15:09.789179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 06:15:09.801855 systemd-journald[1239]: Time spent on flushing to /var/log/journal/7f6bce0e2b98423094a452cf49db8776 is 45.091ms for 1201 entries. Jan 28 06:15:09.801855 systemd-journald[1239]: System Journal (/var/log/journal/7f6bce0e2b98423094a452cf49db8776) is 8M, max 163.5M, 155.5M free. Jan 28 06:15:09.879346 systemd-journald[1239]: Received client request to flush runtime journal. Jan 28 06:15:09.879507 kernel: loop1: detected capacity change from 0 to 111560 Jan 28 06:15:09.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:09.802489 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 06:15:09.824184 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 06:15:09.836866 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 06:15:09.846775 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 06:15:09.862868 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 06:15:09.876669 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 06:15:09.896359 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 06:15:09.919262 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 28 06:15:09.930088 kernel: loop2: detected capacity change from 0 to 50784 Jan 28 06:15:09.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:09.944774 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 06:15:09.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:09.963100 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 06:15:09.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:09.975167 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Jan 28 06:15:09.975180 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Jan 28 06:15:09.985385 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 06:15:09.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:09.999515 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 06:15:10.019171 kernel: loop3: detected capacity change from 0 to 224512 Jan 28 06:15:10.078165 kernel: loop4: detected capacity change from 0 to 111560 Jan 28 06:15:10.086349 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 28 06:15:10.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.100000 audit: BPF prog-id=18 op=LOAD Jan 28 06:15:10.100000 audit: BPF prog-id=19 op=LOAD Jan 28 06:15:10.100000 audit: BPF prog-id=20 op=LOAD Jan 28 06:15:10.102404 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 06:15:10.104189 kernel: loop5: detected capacity change from 0 to 50784 Jan 28 06:15:10.119000 audit: BPF prog-id=21 op=LOAD Jan 28 06:15:10.121216 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 06:15:10.133252 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 06:15:10.142266 kernel: loop6: detected capacity change from 0 to 224512 Jan 28 06:15:10.150000 audit: BPF prog-id=22 op=LOAD Jan 28 06:15:10.152000 audit: BPF prog-id=23 op=LOAD Jan 28 06:15:10.152000 audit: BPF prog-id=24 op=LOAD Jan 28 06:15:10.153300 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 06:15:10.159839 (sd-merge)[1296]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 28 06:15:10.165508 (sd-merge)[1296]: Merged extensions into '/usr'. Jan 28 06:15:10.170000 audit: BPF prog-id=25 op=LOAD Jan 28 06:15:10.170000 audit: BPF prog-id=26 op=LOAD Jan 28 06:15:10.170000 audit: BPF prog-id=27 op=LOAD Jan 28 06:15:10.172354 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 06:15:10.187071 systemd[1]: Reload requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 06:15:10.187178 systemd[1]: Reloading... Jan 28 06:15:10.206160 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Jan 28 06:15:10.206534 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. 
Jan 28 06:15:10.299102 zram_generator::config[1327]: No configuration found. Jan 28 06:15:10.299141 systemd-nsresourced[1301]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 06:15:10.399903 systemd-oomd[1298]: No swap; memory pressure usage will be degraded Jan 28 06:15:10.421562 systemd-resolved[1299]: Positive Trust Anchors: Jan 28 06:15:10.421660 systemd-resolved[1299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 06:15:10.421666 systemd-resolved[1299]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 06:15:10.421693 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 06:15:10.441204 systemd-resolved[1299]: Defaulting to hostname 'linux'. Jan 28 06:15:10.591825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 06:15:10.592652 systemd[1]: Reloading finished in 404 ms. Jan 28 06:15:10.640378 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 06:15:10.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.652109 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Jan 28 06:15:10.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.663121 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 06:15:10.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.676887 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 06:15:10.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.690389 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 06:15:10.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.704800 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 06:15:10.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:10.720356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 28 06:15:10.753282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 06:15:10.780651 systemd[1]: Starting ensure-sysext.service... Jan 28 06:15:10.789606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 06:15:10.799000 audit: BPF prog-id=8 op=UNLOAD Jan 28 06:15:10.799000 audit: BPF prog-id=7 op=UNLOAD Jan 28 06:15:10.801000 audit: BPF prog-id=28 op=LOAD Jan 28 06:15:10.801000 audit: BPF prog-id=29 op=LOAD Jan 28 06:15:10.802799 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 06:15:10.813000 audit: BPF prog-id=30 op=LOAD Jan 28 06:15:10.813000 audit: BPF prog-id=22 op=UNLOAD Jan 28 06:15:10.814000 audit: BPF prog-id=31 op=LOAD Jan 28 06:15:10.814000 audit: BPF prog-id=32 op=LOAD Jan 28 06:15:10.814000 audit: BPF prog-id=23 op=UNLOAD Jan 28 06:15:10.814000 audit: BPF prog-id=24 op=UNLOAD Jan 28 06:15:10.816000 audit: BPF prog-id=33 op=LOAD Jan 28 06:15:10.816000 audit: BPF prog-id=21 op=UNLOAD Jan 28 06:15:10.818000 audit: BPF prog-id=34 op=LOAD Jan 28 06:15:10.818000 audit: BPF prog-id=15 op=UNLOAD Jan 28 06:15:10.818000 audit: BPF prog-id=35 op=LOAD Jan 28 06:15:10.818000 audit: BPF prog-id=36 op=LOAD Jan 28 06:15:10.818000 audit: BPF prog-id=16 op=UNLOAD Jan 28 06:15:10.818000 audit: BPF prog-id=17 op=UNLOAD Jan 28 06:15:10.819000 audit: BPF prog-id=37 op=LOAD Jan 28 06:15:10.819000 audit: BPF prog-id=25 op=UNLOAD Jan 28 06:15:10.819000 audit: BPF prog-id=38 op=LOAD Jan 28 06:15:10.819000 audit: BPF prog-id=39 op=LOAD Jan 28 06:15:10.819000 audit: BPF prog-id=26 op=UNLOAD Jan 28 06:15:10.819000 audit: BPF prog-id=27 op=UNLOAD Jan 28 06:15:10.821000 audit: BPF prog-id=40 op=LOAD Jan 28 06:15:10.822000 audit: BPF prog-id=18 op=UNLOAD Jan 28 06:15:10.822000 audit: BPF prog-id=41 op=LOAD Jan 28 06:15:10.822000 audit: BPF prog-id=42 op=LOAD Jan 28 06:15:10.822000 audit: BPF prog-id=19 op=UNLOAD Jan 28 06:15:10.822000 audit: 
BPF prog-id=20 op=UNLOAD Jan 28 06:15:10.829857 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 06:15:10.830200 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 06:15:10.830547 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 06:15:10.830914 systemd[1]: Reload requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Jan 28 06:15:10.831139 systemd[1]: Reloading... Jan 28 06:15:10.832098 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 28 06:15:10.832178 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Jan 28 06:15:10.845641 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 06:15:10.845662 systemd-tmpfiles[1383]: Skipping /boot Jan 28 06:15:10.862172 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 06:15:10.862187 systemd-tmpfiles[1383]: Skipping /boot Jan 28 06:15:10.867860 systemd-udevd[1384]: Using default interface naming scheme 'v257'. Jan 28 06:15:10.946150 zram_generator::config[1421]: No configuration found. Jan 28 06:15:11.092333 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 06:15:11.139396 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 28 06:15:11.150683 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 28 06:15:11.150718 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 06:15:11.166232 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 06:15:11.200095 kernel: ACPI: button: Power Button [PWRF] Jan 28 06:15:11.291681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 06:15:11.304701 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 28 06:15:11.304858 systemd[1]: Reloading finished in 473 ms. Jan 28 06:15:11.318252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 06:15:11.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:12.052052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 06:15:12.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:12.076000 audit: BPF prog-id=43 op=LOAD Jan 28 06:15:12.076000 audit: BPF prog-id=30 op=UNLOAD Jan 28 06:15:12.076000 audit: BPF prog-id=44 op=LOAD Jan 28 06:15:12.076000 audit: BPF prog-id=45 op=LOAD Jan 28 06:15:12.076000 audit: BPF prog-id=31 op=UNLOAD Jan 28 06:15:12.076000 audit: BPF prog-id=32 op=UNLOAD Jan 28 06:15:12.080000 audit: BPF prog-id=46 op=LOAD Jan 28 06:15:12.080000 audit: BPF prog-id=34 op=UNLOAD Jan 28 06:15:12.080000 audit: BPF prog-id=47 op=LOAD Jan 28 06:15:12.080000 audit: BPF prog-id=48 op=LOAD Jan 28 06:15:12.080000 audit: BPF prog-id=35 op=UNLOAD Jan 28 06:15:12.080000 audit: BPF prog-id=36 op=UNLOAD Jan 28 06:15:12.082000 audit: BPF prog-id=49 op=LOAD Jan 28 06:15:12.082000 audit: BPF prog-id=37 op=UNLOAD Jan 28 06:15:12.082000 audit: BPF prog-id=50 op=LOAD Jan 28 06:15:12.083000 audit: BPF prog-id=51 op=LOAD Jan 28 06:15:12.083000 audit: BPF prog-id=38 op=UNLOAD Jan 28 06:15:12.083000 audit: BPF prog-id=39 op=UNLOAD Jan 28 06:15:12.086000 audit: BPF prog-id=52 op=LOAD Jan 28 06:15:12.086000 audit: BPF prog-id=33 op=UNLOAD Jan 28 06:15:12.087000 audit: BPF prog-id=53 op=LOAD Jan 28 06:15:12.087000 audit: BPF prog-id=54 op=LOAD Jan 28 06:15:12.087000 audit: BPF 
prog-id=28 op=UNLOAD Jan 28 06:15:12.087000 audit: BPF prog-id=29 op=UNLOAD Jan 28 06:15:12.089000 audit: BPF prog-id=55 op=LOAD Jan 28 06:15:12.089000 audit: BPF prog-id=40 op=UNLOAD Jan 28 06:15:12.089000 audit: BPF prog-id=56 op=LOAD Jan 28 06:15:12.089000 audit: BPF prog-id=57 op=LOAD Jan 28 06:15:12.089000 audit: BPF prog-id=41 op=UNLOAD Jan 28 06:15:12.089000 audit: BPF prog-id=42 op=UNLOAD Jan 28 06:15:12.147569 systemd[1]: Finished ensure-sysext.service. Jan 28 06:15:12.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:12.207693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:15:12.210412 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 06:15:12.221386 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 06:15:12.233568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 06:15:12.244355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 06:15:12.260283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 06:15:12.282323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 06:15:12.938919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 06:15:12.949180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 06:15:12.949851 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 28 06:15:12.956171 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 06:15:12.973756 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 06:15:12.987731 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 06:15:12.998818 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 06:15:13.021000 audit: BPF prog-id=58 op=LOAD Jan 28 06:15:13.029262 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 06:15:13.045000 audit: BPF prog-id=59 op=LOAD Jan 28 06:15:13.107158 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 06:15:13.134375 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 06:15:13.185000 audit[1523]: SYSTEM_BOOT pid=1523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.209910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 06:15:13.210217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 06:15:13.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:15:13.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.214181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 06:15:13.214529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 06:15:13.226912 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 06:15:13.227357 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 06:15:13.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.230547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 06:15:13.230864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 06:15:13.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:15:13.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.244621 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 06:15:13.247228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 06:15:13.305577 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 06:15:13.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:13.359000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 06:15:13.359000 audit[1531]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1cb4f720 a2=420 a3=0 items=0 ppid=1497 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:13.359000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 06:15:13.383813 augenrules[1531]: No rules Jan 28 06:15:13.375716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 06:15:13.376146 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 06:15:13.388151 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 06:15:13.388565 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 28 06:15:13.422790 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 06:15:13.509196 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 06:15:13.608106 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 06:15:13.613622 kernel: kvm_amd: TSC scaling supported Jan 28 06:15:13.613684 kernel: kvm_amd: Nested Virtualization enabled Jan 28 06:15:13.613709 kernel: kvm_amd: Nested Paging enabled Jan 28 06:15:13.613723 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 06:15:13.623322 kernel: kvm_amd: PMU virtualization is disabled Jan 28 06:15:13.650697 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 06:15:13.910586 systemd-networkd[1516]: lo: Link UP Jan 28 06:15:13.910597 systemd-networkd[1516]: lo: Gained carrier Jan 28 06:15:13.917293 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 06:15:13.918346 systemd[1]: Reached target network.target - Network. Jan 28 06:15:13.929338 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:15:13.929343 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 06:15:13.933916 systemd-networkd[1516]: eth0: Link UP Jan 28 06:15:13.935645 systemd-networkd[1516]: eth0: Gained carrier Jan 28 06:15:13.935664 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 06:15:13.937243 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 28 06:15:13.941677 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 06:15:14.005635 systemd-networkd[1516]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 06:15:14.020623 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 06:15:14.021127 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 06:15:14.053667 systemd-timesyncd[1520]: Network configuration changed, trying to establish connection. Jan 28 06:15:14.485699 systemd-resolved[1299]: Clock change detected. Flushing caches. Jan 28 06:15:14.486152 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 06:15:14.487669 systemd-timesyncd[1520]: Initial clock synchronization to Wed 2026-01-28 06:15:14.485423 UTC. Jan 28 06:15:14.525123 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 06:15:14.562560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 06:15:14.613467 kernel: EDAC MC: Ver: 3.0.0 Jan 28 06:15:16.211685 systemd-networkd[1516]: eth0: Gained IPv6LL Jan 28 06:15:16.223497 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 06:15:16.236595 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 06:15:16.691080 ldconfig[1511]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 06:15:16.703838 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 06:15:16.717370 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 06:15:16.870508 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 06:15:16.887865 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 28 06:15:16.922473 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 06:15:16.947706 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 06:15:16.972413 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 06:15:16.994937 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 06:15:17.025428 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 06:15:17.050106 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 06:15:17.082176 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 06:15:17.100633 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 06:15:17.126702 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 06:15:17.127063 systemd[1]: Reached target paths.target - Path Units. Jan 28 06:15:17.142531 systemd[1]: Reached target timers.target - Timer Units. Jan 28 06:15:17.201347 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 06:15:17.231922 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 06:15:17.328600 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 06:15:17.395146 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 06:15:17.430132 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 06:15:17.498106 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 06:15:17.532998 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 28 06:15:17.625308 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 06:15:17.658662 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 06:15:17.673344 systemd[1]: Reached target basic.target - Basic System. Jan 28 06:15:17.699851 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 06:15:17.721881 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 06:15:17.828632 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 06:15:17.878982 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 06:15:17.999941 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 06:15:18.038939 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 06:15:18.059916 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 06:15:18.101413 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 06:15:18.132145 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 06:15:18.133656 jq[1568]: false Jan 28 06:15:18.142110 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 06:15:18.163357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:15:18.181491 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 06:15:18.192863 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 06:15:18.255490 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 06:15:18.269740 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 28 06:15:18.294130 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 06:15:18.344917 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing passwd entry cache Jan 28 06:15:18.346601 extend-filesystems[1569]: Found /dev/vda6 Jan 28 06:15:18.343405 oslogin_cache_refresh[1570]: Refreshing passwd entry cache Jan 28 06:15:18.347460 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 06:15:18.365521 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 06:15:18.370057 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 06:15:18.374507 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 06:15:18.385170 extend-filesystems[1569]: Found /dev/vda9 Jan 28 06:15:18.409916 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 06:15:18.425397 extend-filesystems[1569]: Checking size of /dev/vda9 Jan 28 06:15:18.462736 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 06:15:18.488733 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 06:15:18.494568 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 06:15:18.503408 jq[1586]: true Jan 28 06:15:18.524502 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 06:15:18.525045 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 06:15:18.540095 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 06:15:18.542025 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 28 06:15:18.544947 extend-filesystems[1569]: Resized partition /dev/vda9 Jan 28 06:15:18.568581 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting users, quitting Jan 28 06:15:18.568581 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 06:15:18.568581 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing group entry cache Jan 28 06:15:18.550493 oslogin_cache_refresh[1570]: Failure getting users, quitting Jan 28 06:15:18.550644 oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 06:15:18.550702 oslogin_cache_refresh[1570]: Refreshing group entry cache Jan 28 06:15:18.579954 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting groups, quitting Jan 28 06:15:18.579923 oslogin_cache_refresh[1570]: Failure getting groups, quitting Jan 28 06:15:18.580138 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 06:15:18.579985 oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 06:15:18.595927 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 06:15:18.599182 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 06:15:18.642601 extend-filesystems[1606]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 06:15:18.687859 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 06:15:18.697661 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 28 06:15:18.722730 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 06:15:18.724112 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 28 06:15:18.730968 update_engine[1584]: I20260128 06:15:18.730744 1584 main.cc:92] Flatcar Update Engine starting Jan 28 06:15:18.785403 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 06:15:18.833505 tar[1605]: linux-amd64/LICENSE Jan 28 06:15:18.833505 tar[1605]: linux-amd64/helm Jan 28 06:15:18.841043 jq[1607]: true Jan 28 06:15:18.856938 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 28 06:15:18.925697 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 06:15:18.925697 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 06:15:18.925697 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 28 06:15:19.034010 extend-filesystems[1569]: Resized filesystem in /dev/vda9 Jan 28 06:15:18.942122 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 06:15:18.942671 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 06:15:19.169025 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 06:15:19.194884 update_engine[1584]: I20260128 06:15:19.192902 1584 update_check_scheduler.cc:74] Next update check in 2m43s Jan 28 06:15:19.168583 dbus-daemon[1566]: [system] SELinux support is enabled Jan 28 06:15:19.195189 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 06:15:19.182034 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 06:15:19.182068 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 28 06:15:19.230750 bash[1658]: Updated "/home/core/.ssh/authorized_keys" Jan 28 06:15:19.213069 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 06:15:19.213092 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 06:15:19.245040 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 06:15:19.245067 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 06:15:19.247901 systemd-logind[1580]: New seat seat0. Jan 28 06:15:19.249124 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 06:15:19.260675 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 06:15:19.276594 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 06:15:19.280138 systemd[1]: Started update-engine.service - Update Engine. Jan 28 06:15:19.281035 dbus-daemon[1566]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 06:15:19.290889 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 06:15:19.312523 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 06:15:19.330604 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 06:15:19.404575 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 06:15:19.405085 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 06:15:19.430914 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 06:15:19.457857 locksmithd[1676]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 06:15:19.476098 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 28 06:15:19.501969 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 06:15:19.529577 containerd[1608]: time="2026-01-28T06:15:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 06:15:19.531949 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 06:15:19.542875 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 06:15:19.553717 containerd[1608]: time="2026-01-28T06:15:19.553593161Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 06:15:19.584546 containerd[1608]: time="2026-01-28T06:15:19.583943962Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.55µs" Jan 28 06:15:19.584546 containerd[1608]: time="2026-01-28T06:15:19.583979699Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 06:15:19.584546 containerd[1608]: time="2026-01-28T06:15:19.584020925Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 06:15:19.584546 containerd[1608]: time="2026-01-28T06:15:19.584035743Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 06:15:19.584546 containerd[1608]: time="2026-01-28T06:15:19.584406956Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 06:15:19.584546 containerd[1608]: time="2026-01-28T06:15:19.584429338Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 06:15:19.584906 containerd[1608]: time="2026-01-28T06:15:19.584633388Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 06:15:19.584906 containerd[1608]: time="2026-01-28T06:15:19.584653837Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 06:15:19.585161 containerd[1608]: time="2026-01-28T06:15:19.585059093Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 06:15:19.585161 containerd[1608]: time="2026-01-28T06:15:19.585078670Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 06:15:19.585161 containerd[1608]: time="2026-01-28T06:15:19.585093107Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 06:15:19.585161 containerd[1608]: time="2026-01-28T06:15:19.585104548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.588529651Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.588551272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.588704357Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.589090628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.589128939Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.589143597Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.589179073Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.589707029Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 06:15:19.590387 containerd[1608]: time="2026-01-28T06:15:19.589889038Z" level=info msg="metadata content store policy set" policy=shared Jan 28 06:15:19.602088 containerd[1608]: time="2026-01-28T06:15:19.601119901Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603010930Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603655614Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603672525Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603686732Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 
06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603701089Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603712029Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603721507Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603931549Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603948220Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.603958830Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.604407497Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.604424629Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 06:15:19.604187 containerd[1608]: time="2026-01-28T06:15:19.604438515Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 06:15:19.604943 containerd[1608]: time="2026-01-28T06:15:19.604556636Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605384952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 
Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605491090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605503674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605514374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605523040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605533108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605542787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605553106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605567703Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605576840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.605599853Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 06:15:19.609949 containerd[1608]: time="2026-01-28T06:15:19.610189169Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 06:15:19.609949 containerd[1608]: 
time="2026-01-28T06:15:19.610471827Z" level=info msg="Start snapshots syncer" Jan 28 06:15:19.611741 containerd[1608]: time="2026-01-28T06:15:19.610567194Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 06:15:19.612435 containerd[1608]: time="2026-01-28T06:15:19.612018123Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var
/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 06:15:19.612435 containerd[1608]: time="2026-01-28T06:15:19.612134229Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612405646Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612529027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612549024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612558501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612849274Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612867167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612876836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612886634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612903816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 06:15:19.612916 containerd[1608]: time="2026-01-28T06:15:19.612913854Z" level=info 
msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613014934Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613107757Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613119238Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613128686Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613136761Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613146048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613156839Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613172898Z" level=info msg="runtime interface created" Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613177907Z" level=info msg="created NRI interface" Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613190461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613357282Z" level=info msg="Connect 
containerd service" Jan 28 06:15:19.613500 containerd[1608]: time="2026-01-28T06:15:19.613383040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 06:15:19.618843 containerd[1608]: time="2026-01-28T06:15:19.617048482Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 06:15:19.853010 containerd[1608]: time="2026-01-28T06:15:19.851701579Z" level=info msg="Start subscribing containerd event" Jan 28 06:15:19.853010 containerd[1608]: time="2026-01-28T06:15:19.851973477Z" level=info msg="Start recovering state" Jan 28 06:15:19.853010 containerd[1608]: time="2026-01-28T06:15:19.852424248Z" level=info msg="Start event monitor" Jan 28 06:15:19.857953 containerd[1608]: time="2026-01-28T06:15:19.856646760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 06:15:19.863074 containerd[1608]: time="2026-01-28T06:15:19.862325829Z" level=info msg="Start cni network conf syncer for default" Jan 28 06:15:19.863074 containerd[1608]: time="2026-01-28T06:15:19.862365784Z" level=info msg="Start streaming server" Jan 28 06:15:19.863074 containerd[1608]: time="2026-01-28T06:15:19.862379420Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 06:15:19.863074 containerd[1608]: time="2026-01-28T06:15:19.862445994Z" level=info msg="runtime interface starting up..." Jan 28 06:15:19.863074 containerd[1608]: time="2026-01-28T06:15:19.862453147Z" level=info msg="starting plugins..." Jan 28 06:15:19.863074 containerd[1608]: time="2026-01-28T06:15:19.862505225Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 06:15:19.869445 containerd[1608]: time="2026-01-28T06:15:19.865397553Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 28 06:15:19.869882 containerd[1608]: time="2026-01-28T06:15:19.869386198Z" level=info msg="containerd successfully booted in 0.340818s" Jan 28 06:15:19.870496 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 06:15:20.035966 tar[1605]: linux-amd64/README.md Jan 28 06:15:20.131889 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 06:15:20.986132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:15:20.998587 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 06:15:21.010400 systemd[1]: Startup finished in 8.768s (kernel) + 24.746s (initrd) + 13.784s (userspace) = 47.299s. Jan 28 06:15:21.043999 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:15:22.018610 kubelet[1713]: E0128 06:15:22.018400 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:15:22.024606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:15:22.025070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:15:22.026067 systemd[1]: kubelet.service: Consumed 1.620s CPU time, 264.2M memory peak. Jan 28 06:15:24.560507 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 06:15:24.562769 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:36924.service - OpenSSH per-connection server daemon (10.0.0.1:36924). 
Jan 28 06:15:24.817397 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 36924 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:15:24.822513 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:24.839580 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 06:15:24.842126 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 06:15:24.854730 systemd-logind[1580]: New session 1 of user core. Jan 28 06:15:24.892486 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 06:15:24.896944 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 06:15:24.926968 (systemd)[1732]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:24.935943 systemd-logind[1580]: New session 2 of user core. Jan 28 06:15:25.149708 systemd[1732]: Queued start job for default target default.target. Jan 28 06:15:25.165692 systemd[1732]: Created slice app.slice - User Application Slice. Jan 28 06:15:25.165888 systemd[1732]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 06:15:25.165905 systemd[1732]: Reached target paths.target - Paths. Jan 28 06:15:25.166036 systemd[1732]: Reached target timers.target - Timers. Jan 28 06:15:25.168666 systemd[1732]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 06:15:25.170742 systemd[1732]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 06:15:25.194785 systemd[1732]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 06:15:25.195006 systemd[1732]: Reached target sockets.target - Sockets. Jan 28 06:15:25.197175 systemd[1732]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 06:15:25.197776 systemd[1732]: Reached target basic.target - Basic System. 
Jan 28 06:15:25.198005 systemd[1732]: Reached target default.target - Main User Target. Jan 28 06:15:25.198045 systemd[1732]: Startup finished in 249ms. Jan 28 06:15:25.198169 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 06:15:25.209748 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 06:15:25.240747 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:36930.service - OpenSSH per-connection server daemon (10.0.0.1:36930). Jan 28 06:15:25.380533 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 36930 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:15:25.383789 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:25.396068 systemd-logind[1580]: New session 3 of user core. Jan 28 06:15:25.409782 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 06:15:25.444362 sshd[1750]: Connection closed by 10.0.0.1 port 36930 Jan 28 06:15:25.444982 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Jan 28 06:15:25.460800 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:36930.service: Deactivated successfully. Jan 28 06:15:25.464647 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 06:15:25.468139 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Jan 28 06:15:25.474597 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:36934.service - OpenSSH per-connection server daemon (10.0.0.1:36934). Jan 28 06:15:25.475953 systemd-logind[1580]: Removed session 3. Jan 28 06:15:25.599510 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 36934 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:15:25.603048 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:25.615747 systemd-logind[1580]: New session 4 of user core. Jan 28 06:15:25.629675 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 28 06:15:25.654964 sshd[1760]: Connection closed by 10.0.0.1 port 36934 Jan 28 06:15:25.657609 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 28 06:15:25.667613 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:36934.service: Deactivated successfully. Jan 28 06:15:25.670597 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 06:15:25.673917 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Jan 28 06:15:25.678137 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:36950.service - OpenSSH per-connection server daemon (10.0.0.1:36950). Jan 28 06:15:25.680033 systemd-logind[1580]: Removed session 4. Jan 28 06:15:25.787585 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 36950 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:15:25.790703 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:25.802941 systemd-logind[1580]: New session 5 of user core. Jan 28 06:15:25.824676 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 06:15:25.862159 sshd[1771]: Connection closed by 10.0.0.1 port 36950 Jan 28 06:15:25.862945 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jan 28 06:15:25.873599 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:36950.service: Deactivated successfully. Jan 28 06:15:25.877517 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 06:15:25.880142 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Jan 28 06:15:25.885493 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:36962.service - OpenSSH per-connection server daemon (10.0.0.1:36962). Jan 28 06:15:25.886558 systemd-logind[1580]: Removed session 5. 
Jan 28 06:15:25.996674 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:15:25.998905 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:26.011596 systemd-logind[1580]: New session 6 of user core. Jan 28 06:15:26.023759 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 06:15:26.077183 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 06:15:26.077979 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:15:26.112741 sudo[1782]: pam_unix(sudo:session): session closed for user root Jan 28 06:15:26.117056 sshd[1781]: Connection closed by 10.0.0.1 port 36962 Jan 28 06:15:26.117154 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jan 28 06:15:26.130070 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:36962.service: Deactivated successfully. Jan 28 06:15:26.134594 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 06:15:26.139577 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Jan 28 06:15:26.145735 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:36972.service - OpenSSH per-connection server daemon (10.0.0.1:36972). Jan 28 06:15:26.147514 systemd-logind[1580]: Removed session 6. Jan 28 06:15:26.250101 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 36972 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:15:26.253173 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:26.265689 systemd-logind[1580]: New session 7 of user core. Jan 28 06:15:26.280754 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 28 06:15:26.322187 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 06:15:26.322951 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:15:26.335641 sudo[1795]: pam_unix(sudo:session): session closed for user root Jan 28 06:15:26.392390 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 06:15:26.718725 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:15:26.758095 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 06:15:26.890000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 06:15:26.892590 augenrules[1819]: No rules Jan 28 06:15:26.895462 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 06:15:26.896363 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 28 06:15:26.899720 kernel: kauditd_printk_skb: 110 callbacks suppressed Jan 28 06:15:26.900357 kernel: audit: type=1305 audit(1769580926.890:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 06:15:26.899606 sudo[1794]: pam_unix(sudo:session): session closed for user root Jan 28 06:15:26.910412 sshd[1793]: Connection closed by 10.0.0.1 port 36972 Jan 28 06:15:26.910130 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Jan 28 06:15:26.890000 audit[1819]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda1d29e30 a2=420 a3=0 items=0 ppid=1800 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:26.923529 kernel: audit: type=1300 audit(1769580926.890:225): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda1d29e30 a2=420 a3=0 items=0 ppid=1800 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:26.890000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 06:15:26.968567 kernel: audit: type=1327 audit(1769580926.890:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 06:15:26.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:26.990515 kernel: audit: type=1130 audit(1769580926.895:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:15:26.990586 kernel: audit: type=1131 audit(1769580926.895:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:26.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:26.898000 audit[1794]: USER_END pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.041077 kernel: audit: type=1106 audit(1769580926.898:228): pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.041546 kernel: audit: type=1104 audit(1769580926.898:229): pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:15:26.898000 audit[1794]: CRED_DISP pid=1794 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 06:15:26.912000 audit[1789]: USER_END pid=1789 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:27.097642 kernel: audit: type=1106 audit(1769580926.912:230): pid=1789 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:26.912000 audit[1789]: CRED_DISP pid=1789 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:27.124092 kernel: audit: type=1104 audit(1769580926.912:231): pid=1789 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:27.131518 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:36972.service: Deactivated successfully. Jan 28 06:15:27.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.25:22-10.0.0.1:36972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.135461 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 06:15:27.138425 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Jan 28 06:15:27.142977 systemd-logind[1580]: Removed session 7. 
Jan 28 06:15:27.145181 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:36982.service - OpenSSH per-connection server daemon (10.0.0.1:36982). Jan 28 06:15:27.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.25:22-10.0.0.1:36982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.158457 kernel: audit: type=1131 audit(1769580927.130:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.25:22-10.0.0.1:36972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.265000 audit[1828]: USER_ACCT pid=1828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:27.269139 sshd[1828]: Accepted publickey for core from 10.0.0.1 port 36982 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:15:27.270000 audit[1828]: CRED_ACQ pid=1828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:27.270000 audit[1828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc115f4650 a2=3 a3=0 items=0 ppid=1 pid=1828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:27.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:15:27.272419 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:15:27.303965 
systemd-logind[1580]: New session 8 of user core. Jan 28 06:15:27.475506 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 06:15:27.720000 audit[1828]: USER_START pid=1828 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:27.895000 audit[1832]: CRED_ACQ pid=1832 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:15:27.957000 audit[1833]: USER_ACCT pid=1833 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.957000 audit[1833]: CRED_REFR pid=1833 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.958775 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 06:15:27.958000 audit[1833]: USER_START pid=1833 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:15:27.959677 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 06:15:32.528781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 06:15:32.535429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 06:15:33.150088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:15:33.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:33.163098 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 28 06:15:33.163615 kernel: audit: type=1130 audit(1769580933.150:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:34.147385 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:15:34.656694 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 06:15:34.849375 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 06:15:35.462383 kubelet[1863]: E0128 06:15:35.461377 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:15:35.480059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:15:35.480537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:15:35.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 06:15:35.482014 systemd[1]: kubelet.service: Consumed 1.810s CPU time, 108.4M memory peak. Jan 28 06:15:35.516777 kernel: audit: type=1131 audit(1769580935.480:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:15:41.317742 dockerd[1872]: time="2026-01-28T06:15:41.315474930Z" level=info msg="Starting up" Jan 28 06:15:41.325873 dockerd[1872]: time="2026-01-28T06:15:41.325726184Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 06:15:41.734499 dockerd[1872]: time="2026-01-28T06:15:41.732885534Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 06:15:42.028622 systemd[1]: var-lib-docker-metacopy\x2dcheck2093852487-merged.mount: Deactivated successfully. Jan 28 06:15:42.272896 dockerd[1872]: time="2026-01-28T06:15:42.271935923Z" level=info msg="Loading containers: start." 
Jan 28 06:15:42.425788 kernel: Initializing XFRM netlink socket Jan 28 06:15:42.941000 audit[1928]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:42.941000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff3b57caa0 a2=0 a3=0 items=0 ppid=1872 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.011611 kernel: audit: type=1325 audit(1769580942.941:244): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.011715 kernel: audit: type=1300 audit(1769580942.941:244): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff3b57caa0 a2=0 a3=0 items=0 ppid=1872 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.011740 kernel: audit: type=1327 audit(1769580942.941:244): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 06:15:42.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 06:15:42.954000 audit[1930]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.047613 kernel: audit: type=1325 audit(1769580942.954:245): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.047680 kernel: audit: type=1300 audit(1769580942.954:245): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff30426cd0 a2=0 a3=0 items=0 ppid=1872 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:42.954000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff30426cd0 a2=0 a3=0 items=0 ppid=1872 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.084602 kernel: audit: type=1327 audit(1769580942.954:245): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 06:15:42.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 06:15:43.108476 kernel: audit: type=1325 audit(1769580942.973:246): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:42.973000 audit[1932]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:42.973000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7040bcc0 a2=0 a3=0 items=0 ppid=1872 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.171893 kernel: audit: type=1300 audit(1769580942.973:246): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7040bcc0 a2=0 a3=0 items=0 ppid=1872 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.173866 kernel: audit: type=1327 audit(1769580942.973:246): 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 06:15:42.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 06:15:42.983000 audit[1934]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:42.983000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9b2f91e0 a2=0 a3=0 items=0 ppid=1872 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.225530 kernel: audit: type=1325 audit(1769580942.983:247): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:42.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 06:15:42.998000 audit[1936]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:42.998000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe779e3ac0 a2=0 a3=0 items=0 ppid=1872 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:42.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 06:15:43.010000 audit[1938]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.010000 audit[1938]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=112 a0=3 a1=7fffad2c7260 a2=0 a3=0 items=0 ppid=1872 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:15:43.025000 audit[1940]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.025000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc4ce926c0 a2=0 a3=0 items=0 ppid=1872 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.025000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 06:15:43.037000 audit[1942]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.037000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcf6cf1990 a2=0 a3=0 items=0 ppid=1872 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 06:15:43.516000 audit[1945]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1945 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.516000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffce07d71f0 a2=0 a3=0 items=0 ppid=1872 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 06:15:43.532000 audit[1947]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.532000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff684cab20 a2=0 a3=0 items=0 ppid=1872 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.532000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 06:15:43.548000 audit[1949]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.548000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffe38f8df0 a2=0 a3=0 items=0 ppid=1872 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.548000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 
06:15:43.568000 audit[1951]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.568000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd6b3ca850 a2=0 a3=0 items=0 ppid=1872 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:15:43.581000 audit[1953]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:43.581000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe70824ed0 a2=0 a3=0 items=0 ppid=1872 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 06:15:43.875000 audit[1983]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.875000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc18c43080 a2=0 a3=0 items=0 ppid=1872 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.875000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 06:15:43.891000 audit[1985]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.891000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd60a51880 a2=0 a3=0 items=0 ppid=1872 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 06:15:43.908000 audit[1987]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.908000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe31853260 a2=0 a3=0 items=0 ppid=1872 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 06:15:43.923000 audit[1989]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.923000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7047e580 a2=0 a3=0 items=0 ppid=1872 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.923000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 06:15:43.938000 audit[1991]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.938000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff18302400 a2=0 a3=0 items=0 ppid=1872 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 06:15:43.950000 audit[1993]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.950000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffd711ba20 a2=0 a3=0 items=0 ppid=1872 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:15:43.962000 audit[1995]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.962000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff874a44a0 a2=0 a3=0 items=0 ppid=1872 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.962000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 06:15:43.976000 audit[1997]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.976000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffce897df60 a2=0 a3=0 items=0 ppid=1872 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 06:15:43.993000 audit[1999]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:43.993000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffee5ecf790 a2=0 a3=0 items=0 ppid=1872 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:43.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 06:15:44.006000 audit[2001]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:44.006000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcbcfe1bf0 a2=0 a3=0 items=0 ppid=1872 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 06:15:44.020000 audit[2003]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:44.020000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdba826320 a2=0 a3=0 items=0 ppid=1872 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 06:15:44.045000 audit[2005]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:44.045000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcb52e0590 a2=0 a3=0 items=0 ppid=1872 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 06:15:44.063000 audit[2007]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:44.063000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc1f9aaee0 a2=0 a3=0 items=0 
ppid=1872 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.063000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 06:15:44.107000 audit[2012]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.107000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd48e7d150 a2=0 a3=0 items=0 ppid=1872 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 06:15:44.125000 audit[2014]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.125000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdff339be0 a2=0 a3=0 items=0 ppid=1872 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 06:15:44.150000 audit[2016]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.150000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcd7244540 a2=0 a3=0 items=0 ppid=1872 
pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 06:15:44.181000 audit[2018]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:44.181000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdbee6c080 a2=0 a3=0 items=0 ppid=1872 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 06:15:44.196000 audit[2020]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:44.196000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd30ebbde0 a2=0 a3=0 items=0 ppid=1872 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 06:15:44.227000 audit[2022]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:15:44.227000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc60c6a530 a2=0 a3=0 items=0 ppid=1872 pid=2022 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 06:15:44.345000 audit[2026]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.345000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffed7c8b240 a2=0 a3=0 items=0 ppid=1872 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 06:15:44.372000 audit[2028]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.372000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffed8562850 a2=0 a3=0 items=0 ppid=1872 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.372000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 06:15:44.446000 audit[2036]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.446000 audit[2036]: SYSCALL arch=c000003e syscall=46 
success=yes exit=300 a0=3 a1=7ffefec604a0 a2=0 a3=0 items=0 ppid=1872 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 06:15:44.559000 audit[2042]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.559000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff9211aff0 a2=0 a3=0 items=0 ppid=1872 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 06:15:44.591000 audit[2044]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.591000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe50642ea0 a2=0 a3=0 items=0 ppid=1872 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.591000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 06:15:44.618000 audit[2046]: 
NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.618000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf648cfe0 a2=0 a3=0 items=0 ppid=1872 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 06:15:44.645000 audit[2048]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.645000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffe35a3410 a2=0 a3=0 items=0 ppid=1872 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.645000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 06:15:44.669000 audit[2050]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:15:44.669000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc7e302cb0 a2=0 a3=0 items=0 ppid=1872 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:15:44.669000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 06:15:44.673826 systemd-networkd[1516]: docker0: Link UP Jan 28 06:15:44.717522 dockerd[1872]: time="2026-01-28T06:15:44.699655290Z" level=info msg="Loading containers: done." Jan 28 06:15:44.952703 dockerd[1872]: time="2026-01-28T06:15:44.952452624Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 06:15:44.953816 dockerd[1872]: time="2026-01-28T06:15:44.953599135Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 06:15:44.954661 dockerd[1872]: time="2026-01-28T06:15:44.954537235Z" level=info msg="Initializing buildkit" Jan 28 06:15:45.240528 dockerd[1872]: time="2026-01-28T06:15:45.239114184Z" level=info msg="Completed buildkit initialization" Jan 28 06:15:45.382770 dockerd[1872]: time="2026-01-28T06:15:45.382436409Z" level=info msg="Daemon has completed initialization" Jan 28 06:15:45.384140 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 06:15:45.384840 dockerd[1872]: time="2026-01-28T06:15:45.382897650Z" level=info msg="API listen on /run/docker.sock" Jan 28 06:15:45.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:46.040496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 06:15:46.045881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:15:46.614379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 06:15:46.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:46.660911 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:15:47.816367 kubelet[2102]: E0128 06:15:47.815522 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:15:47.824967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:15:47.825681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:15:47.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:15:47.828765 systemd[1]: kubelet.service: Consumed 2.018s CPU time, 110M memory peak. Jan 28 06:15:49.291183 containerd[1608]: time="2026-01-28T06:15:49.289832996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 06:15:50.415859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2625959329.mount: Deactivated successfully. 
Jan 28 06:15:55.868392 containerd[1608]: time="2026-01-28T06:15:55.867878003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:15:55.870765 containerd[1608]: time="2026-01-28T06:15:55.870719459Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 28 06:15:55.931975 containerd[1608]: time="2026-01-28T06:15:55.931655493Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:15:55.952186 containerd[1608]: time="2026-01-28T06:15:55.951563443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:15:55.953638 containerd[1608]: time="2026-01-28T06:15:55.952758244Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 6.662883852s" Jan 28 06:15:55.953638 containerd[1608]: time="2026-01-28T06:15:55.953111988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 06:15:56.000763 containerd[1608]: time="2026-01-28T06:15:56.000417277Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 06:15:57.887511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 28 06:15:57.897600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:15:58.771066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:15:58.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:58.778330 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 28 06:15:58.778388 kernel: audit: type=1130 audit(1769580958.771:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:15:58.819655 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:15:59.396777 kubelet[2182]: E0128 06:15:59.395476 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:15:59.405096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:15:59.405552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:15:59.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:15:59.407691 systemd[1]: kubelet.service: Consumed 1.395s CPU time, 113.5M memory peak. 
Jan 28 06:15:59.435726 kernel: audit: type=1131 audit(1769580959.407:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:03.170401 containerd[1608]: time="2026-01-28T06:16:03.169679441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:03.172559 containerd[1608]: time="2026-01-28T06:16:03.172122636Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 28 06:16:03.178533 containerd[1608]: time="2026-01-28T06:16:03.178492763Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:03.191543 containerd[1608]: time="2026-01-28T06:16:03.190989051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:03.194148 containerd[1608]: time="2026-01-28T06:16:03.193732119Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 7.193187146s" Jan 28 06:16:03.194148 containerd[1608]: time="2026-01-28T06:16:03.193981391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 
06:16:03.210846 containerd[1608]: time="2026-01-28T06:16:03.210664887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 06:16:04.725672 update_engine[1584]: I20260128 06:16:04.721119 1584 update_attempter.cc:509] Updating boot flags... Jan 28 06:16:09.701042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 06:16:09.793849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:16:11.003617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:16:11.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:11.046522 kernel: audit: type=1130 audit(1769580971.009:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:16:11.089412 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:16:11.463567 containerd[1608]: time="2026-01-28T06:16:11.462378132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:11.467774 containerd[1608]: time="2026-01-28T06:16:11.467743220Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 28 06:16:11.474419 containerd[1608]: time="2026-01-28T06:16:11.473691876Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:11.482536 containerd[1608]: time="2026-01-28T06:16:11.482508705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:11.485394 containerd[1608]: time="2026-01-28T06:16:11.484991087Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 8.274296465s" Jan 28 06:16:11.485394 containerd[1608]: time="2026-01-28T06:16:11.485165310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 06:16:11.493770 containerd[1608]: time="2026-01-28T06:16:11.493747067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 
06:16:12.053832 kubelet[2217]: E0128 06:16:12.052724 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:16:12.122011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:16:12.123483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:16:12.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:12.141115 systemd[1]: kubelet.service: Consumed 1.753s CPU time, 110.9M memory peak. Jan 28 06:16:12.173687 kernel: audit: type=1131 audit(1769580972.140:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:19.148808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370026987.mount: Deactivated successfully. Jan 28 06:16:22.136097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 06:16:22.148162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:16:22.942059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:16:22.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:16:22.975123 kernel: audit: type=1130 audit(1769580982.941:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:22.988884 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:16:23.176454 kubelet[2241]: E0128 06:16:23.176381 2241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:16:23.181612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:16:23.181912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:16:23.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:23.185883 systemd[1]: kubelet.service: Consumed 897ms CPU time, 110.5M memory peak. Jan 28 06:16:23.220530 kernel: audit: type=1131 audit(1769580983.184:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 06:16:23.699542 containerd[1608]: time="2026-01-28T06:16:23.698423292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:23.701792 containerd[1608]: time="2026-01-28T06:16:23.701124441Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 28 06:16:23.704644 containerd[1608]: time="2026-01-28T06:16:23.704484878Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:23.711826 containerd[1608]: time="2026-01-28T06:16:23.711767476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:23.713466 containerd[1608]: time="2026-01-28T06:16:23.712860199Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 12.217812633s" Jan 28 06:16:23.713466 containerd[1608]: time="2026-01-28T06:16:23.713112800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 06:16:23.719086 containerd[1608]: time="2026-01-28T06:16:23.718738303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 06:16:24.672841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2888857070.mount: Deactivated successfully. 
Jan 28 06:16:27.973465 containerd[1608]: time="2026-01-28T06:16:27.972792873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:27.975613 containerd[1608]: time="2026-01-28T06:16:27.974895804Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18204480" Jan 28 06:16:27.978890 containerd[1608]: time="2026-01-28T06:16:27.978605133Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:27.992915 containerd[1608]: time="2026-01-28T06:16:27.992675850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:27.999923 containerd[1608]: time="2026-01-28T06:16:27.999694131Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.280815877s" Jan 28 06:16:27.999923 containerd[1608]: time="2026-01-28T06:16:27.999740127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 06:16:28.020796 containerd[1608]: time="2026-01-28T06:16:28.019718828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 06:16:29.195880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728972384.mount: Deactivated successfully. 
Jan 28 06:16:29.269530 containerd[1608]: time="2026-01-28T06:16:29.268767816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:16:29.329693 containerd[1608]: time="2026-01-28T06:16:29.269328747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 06:16:29.356950 containerd[1608]: time="2026-01-28T06:16:29.356654702Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:16:29.382493 containerd[1608]: time="2026-01-28T06:16:29.381792058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 06:16:29.383690 containerd[1608]: time="2026-01-28T06:16:29.383542522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.363368865s" Jan 28 06:16:29.383690 containerd[1608]: time="2026-01-28T06:16:29.383576395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 06:16:29.396906 containerd[1608]: time="2026-01-28T06:16:29.396593535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 06:16:31.218577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3377763751.mount: Deactivated 
successfully. Jan 28 06:16:33.382490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 06:16:33.392180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:16:34.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:34.381766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:16:34.421712 kernel: audit: type=1130 audit(1769580994.380:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:34.438642 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:16:35.126708 kubelet[2368]: E0128 06:16:35.126486 2368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:16:35.136685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:16:35.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:35.136974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:16:35.142751 systemd[1]: kubelet.service: Consumed 1.431s CPU time, 109.8M memory peak. 
Jan 28 06:16:35.168697 kernel: audit: type=1131 audit(1769580995.136:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:42.558327 containerd[1608]: time="2026-01-28T06:16:42.557841153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:42.560665 containerd[1608]: time="2026-01-28T06:16:42.559800794Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55731390" Jan 28 06:16:42.561991 containerd[1608]: time="2026-01-28T06:16:42.561949668Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:42.568586 containerd[1608]: time="2026-01-28T06:16:42.568458566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:16:42.569269 containerd[1608]: time="2026-01-28T06:16:42.569026205Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 13.17239469s" Jan 28 06:16:42.569408 containerd[1608]: time="2026-01-28T06:16:42.569309824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 06:16:45.388179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Jan 28 06:16:45.399539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:16:45.953861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:16:45.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:45.972340 kernel: audit: type=1130 audit(1769581005.953:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:45.978668 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 06:16:46.150505 kubelet[2410]: E0128 06:16:46.150360 2410 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 06:16:46.154631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 06:16:46.154890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 06:16:46.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:46.155826 systemd[1]: kubelet.service: Consumed 641ms CPU time, 109M memory peak. Jan 28 06:16:46.173519 kernel: audit: type=1131 audit(1769581006.154:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 06:16:48.147441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:16:48.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:48.147745 systemd[1]: kubelet.service: Consumed 641ms CPU time, 109M memory peak. Jan 28 06:16:48.152101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:16:48.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:48.187697 kernel: audit: type=1130 audit(1769581008.146:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:48.187770 kernel: audit: type=1131 audit(1769581008.146:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:16:48.220813 systemd[1]: Reload requested from client PID 2426 ('systemctl') (unit session-8.scope)... Jan 28 06:16:48.221043 systemd[1]: Reloading... Jan 28 06:16:48.488574 zram_generator::config[2504]: No configuration found. Jan 28 06:16:49.351125 systemd[1]: Reloading finished in 1129 ms. 
Jan 28 06:16:49.384000 audit: BPF prog-id=63 op=LOAD Jan 28 06:16:49.392448 kernel: audit: type=1334 audit(1769581009.384:299): prog-id=63 op=LOAD Jan 28 06:16:49.384000 audit: BPF prog-id=60 op=UNLOAD Jan 28 06:16:49.399349 kernel: audit: type=1334 audit(1769581009.384:300): prog-id=60 op=UNLOAD Jan 28 06:16:49.385000 audit: BPF prog-id=64 op=LOAD Jan 28 06:16:49.385000 audit: BPF prog-id=65 op=LOAD Jan 28 06:16:49.419563 kernel: audit: type=1334 audit(1769581009.385:301): prog-id=64 op=LOAD Jan 28 06:16:49.419648 kernel: audit: type=1334 audit(1769581009.385:302): prog-id=65 op=LOAD Jan 28 06:16:49.419680 kernel: audit: type=1334 audit(1769581009.385:303): prog-id=61 op=UNLOAD Jan 28 06:16:49.385000 audit: BPF prog-id=61 op=UNLOAD Jan 28 06:16:49.385000 audit: BPF prog-id=62 op=UNLOAD Jan 28 06:16:49.431889 kernel: audit: type=1334 audit(1769581009.385:304): prog-id=62 op=UNLOAD Jan 28 06:16:49.387000 audit: BPF prog-id=66 op=LOAD Jan 28 06:16:49.387000 audit: BPF prog-id=49 op=UNLOAD Jan 28 06:16:49.387000 audit: BPF prog-id=67 op=LOAD Jan 28 06:16:49.387000 audit: BPF prog-id=68 op=LOAD Jan 28 06:16:49.387000 audit: BPF prog-id=50 op=UNLOAD Jan 28 06:16:49.387000 audit: BPF prog-id=51 op=UNLOAD Jan 28 06:16:49.388000 audit: BPF prog-id=69 op=LOAD Jan 28 06:16:49.388000 audit: BPF prog-id=46 op=UNLOAD Jan 28 06:16:49.388000 audit: BPF prog-id=70 op=LOAD Jan 28 06:16:49.388000 audit: BPF prog-id=71 op=LOAD Jan 28 06:16:49.388000 audit: BPF prog-id=47 op=UNLOAD Jan 28 06:16:49.388000 audit: BPF prog-id=48 op=UNLOAD Jan 28 06:16:49.389000 audit: BPF prog-id=72 op=LOAD Jan 28 06:16:49.389000 audit: BPF prog-id=43 op=UNLOAD Jan 28 06:16:49.389000 audit: BPF prog-id=73 op=LOAD Jan 28 06:16:49.389000 audit: BPF prog-id=74 op=LOAD Jan 28 06:16:49.389000 audit: BPF prog-id=44 op=UNLOAD Jan 28 06:16:49.389000 audit: BPF prog-id=45 op=UNLOAD Jan 28 06:16:49.392000 audit: BPF prog-id=75 op=LOAD Jan 28 06:16:49.392000 audit: BPF prog-id=59 op=UNLOAD Jan 28 06:16:49.393000 
audit: BPF prog-id=76 op=LOAD Jan 28 06:16:49.393000 audit: BPF prog-id=55 op=UNLOAD Jan 28 06:16:49.393000 audit: BPF prog-id=77 op=LOAD Jan 28 06:16:49.393000 audit: BPF prog-id=78 op=LOAD Jan 28 06:16:49.393000 audit: BPF prog-id=56 op=UNLOAD Jan 28 06:16:49.393000 audit: BPF prog-id=57 op=UNLOAD Jan 28 06:16:49.394000 audit: BPF prog-id=79 op=LOAD Jan 28 06:16:49.394000 audit: BPF prog-id=80 op=LOAD Jan 28 06:16:49.394000 audit: BPF prog-id=53 op=UNLOAD Jan 28 06:16:49.394000 audit: BPF prog-id=54 op=UNLOAD Jan 28 06:16:49.396000 audit: BPF prog-id=81 op=LOAD Jan 28 06:16:49.396000 audit: BPF prog-id=58 op=UNLOAD Jan 28 06:16:49.397000 audit: BPF prog-id=82 op=LOAD Jan 28 06:16:49.397000 audit: BPF prog-id=52 op=UNLOAD Jan 28 06:16:49.443949 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 06:16:49.444396 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 06:16:49.445369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:16:49.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 06:16:49.445683 systemd[1]: kubelet.service: Consumed 259ms CPU time, 98.4M memory peak. Jan 28 06:16:49.449619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:16:49.883771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:16:49.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:16:49.923436 (kubelet)[2517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 06:16:50.085725 kubelet[2517]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:16:50.085725 kubelet[2517]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 06:16:50.085725 kubelet[2517]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:16:50.085725 kubelet[2517]: I0128 06:16:50.085525 2517 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 06:16:50.821411 kubelet[2517]: I0128 06:16:50.821034 2517 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 06:16:50.821411 kubelet[2517]: I0128 06:16:50.821099 2517 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 06:16:50.822588 kubelet[2517]: I0128 06:16:50.821901 2517 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 06:16:50.887728 kubelet[2517]: I0128 06:16:50.887377 2517 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 06:16:50.887728 kubelet[2517]: E0128 06:16:50.887456 2517 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:50.919035 kubelet[2517]: I0128 06:16:50.918737 2517 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 06:16:50.948493 kubelet[2517]: I0128 06:16:50.948020 2517 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 06:16:50.956867 kubelet[2517]: I0128 06:16:50.956454 2517 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 06:16:50.957571 kubelet[2517]: I0128 06:16:50.956620 2517 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyO
ptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 06:16:50.957571 kubelet[2517]: I0128 06:16:50.957531 2517 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 06:16:50.957571 kubelet[2517]: I0128 06:16:50.957546 2517 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 06:16:50.964826 kubelet[2517]: I0128 06:16:50.957992 2517 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:16:50.966721 kubelet[2517]: I0128 06:16:50.966474 2517 kubelet.go:446] "Attempting to sync node with API server" Jan 28 06:16:50.967075 kubelet[2517]: I0128 06:16:50.966846 2517 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 06:16:50.967075 kubelet[2517]: I0128 06:16:50.967013 2517 kubelet.go:352] "Adding apiserver pod source" Jan 28 06:16:50.967664 kubelet[2517]: I0128 06:16:50.967403 2517 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 06:16:50.970970 kubelet[2517]: W0128 06:16:50.970567 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:50.970970 kubelet[2517]: E0128 06:16:50.970805 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:50.971742 kubelet[2517]: W0128 06:16:50.971477 
2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:50.971742 kubelet[2517]: E0128 06:16:50.971599 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:50.974860 kubelet[2517]: I0128 06:16:50.974745 2517 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 06:16:50.976685 kubelet[2517]: I0128 06:16:50.976497 2517 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 06:16:50.978122 kubelet[2517]: W0128 06:16:50.977803 2517 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 28 06:16:50.982494 kubelet[2517]: I0128 06:16:50.982117 2517 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 06:16:50.982763 kubelet[2517]: I0128 06:16:50.982649 2517 server.go:1287] "Started kubelet" Jan 28 06:16:50.988661 kubelet[2517]: I0128 06:16:50.986774 2517 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 06:16:50.990031 kubelet[2517]: I0128 06:16:50.989643 2517 server.go:479] "Adding debug handlers to kubelet server" Jan 28 06:16:50.991647 kubelet[2517]: I0128 06:16:50.990696 2517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 06:16:50.991719 kubelet[2517]: I0128 06:16:50.991650 2517 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 06:16:50.993996 kubelet[2517]: I0128 06:16:50.991786 2517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 06:16:50.993996 kubelet[2517]: I0128 06:16:50.992805 2517 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 06:16:50.995635 kubelet[2517]: E0128 06:16:50.995015 2517 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 06:16:50.995635 kubelet[2517]: I0128 06:16:50.995434 2517 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 06:16:50.995825 kubelet[2517]: I0128 06:16:50.995688 2517 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 06:16:50.995825 kubelet[2517]: I0128 06:16:50.995734 2517 reconciler.go:26] "Reconciler: start to sync state" Jan 28 06:16:50.998467 kubelet[2517]: W0128 06:16:50.996897 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: 
connect: connection refused Jan 28 06:16:50.998467 kubelet[2517]: E0128 06:16:50.997029 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:50.998853 kubelet[2517]: E0128 06:16:50.998827 2517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" Jan 28 06:16:50.999515 kubelet[2517]: I0128 06:16:50.999430 2517 factory.go:221] Registration of the systemd container factory successfully Jan 28 06:16:50.999515 kubelet[2517]: I0128 06:16:50.999500 2517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 06:16:51.037507 kubelet[2517]: I0128 06:16:51.036811 2517 factory.go:221] Registration of the containerd container factory successfully Jan 28 06:16:51.041100 kubelet[2517]: E0128 06:16:51.040810 2517 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 06:16:51.046598 kubelet[2517]: E0128 06:16:51.041790 2517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ed08dc8e3f1a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 06:16:50.98253149 +0000 UTC m=+1.047974348,LastTimestamp:2026-01-28 06:16:50.98253149 +0000 UTC m=+1.047974348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 06:16:51.233711 kubelet[2517]: E0128 06:16:51.233063 2517 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 06:16:51.233711 kubelet[2517]: E0128 06:16:51.233677 2517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" Jan 28 06:16:51.249554 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 28 06:16:51.249720 kernel: audit: type=1325 audit(1769581011.238:341): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.238000 audit[2532]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.238000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 
a1=7ffc7d3d4720 a2=0 a3=0 items=0 ppid=2517 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.291130 kernel: audit: type=1300 audit(1769581011.238:341): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7d3d4720 a2=0 a3=0 items=0 ppid=2517 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 06:16:51.301797 kubelet[2517]: I0128 06:16:51.301766 2517 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 06:16:51.301918 kubelet[2517]: I0128 06:16:51.301901 2517 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 06:16:51.302554 kubelet[2517]: I0128 06:16:51.301986 2517 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:16:51.313010 kernel: audit: type=1327 audit(1769581011.238:341): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 06:16:51.313101 kernel: audit: type=1325 audit(1769581011.253:342): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.253000 audit[2533]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.337905 kernel: audit: type=1300 audit(1769581011.253:342): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfee24690 a2=0 a3=0 items=0 ppid=2517 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.253000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfee24690 a2=0 a3=0 items=0 ppid=2517 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.341510 kubelet[2517]: E0128 06:16:51.339670 2517 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 06:16:51.366054 kubelet[2517]: I0128 06:16:51.365039 2517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 06:16:51.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 06:16:51.387527 kernel: audit: type=1327 audit(1769581011.253:342): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 06:16:51.392415 kernel: audit: type=1325 audit(1769581011.265:343): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.265000 audit[2536]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.413422 kernel: audit: type=1300 audit(1769581011.265:343): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc43e55290 a2=0 a3=0 items=0 ppid=2517 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.265000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc43e55290 a2=0 a3=0 items=0 ppid=2517 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.433954 kubelet[2517]: I0128 06:16:51.431565 2517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 06:16:51.436861 kernel: audit: type=1327 audit(1769581011.265:343): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:16:51.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:16:51.436942 kubelet[2517]: I0128 06:16:51.434896 2517 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 06:16:51.441371 kubelet[2517]: I0128 06:16:51.438382 2517 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 06:16:51.441371 kubelet[2517]: I0128 06:16:51.438491 2517 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 06:16:51.441371 kubelet[2517]: E0128 06:16:51.438780 2517 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 06:16:51.443019 kubelet[2517]: I0128 06:16:51.443001 2517 policy_none.go:49] "None policy: Start" Jan 28 06:16:51.443358 kubelet[2517]: I0128 06:16:51.443340 2517 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 06:16:51.443441 kubelet[2517]: I0128 06:16:51.443429 2517 state_mem.go:35] "Initializing new in-memory state store" Jan 28 06:16:51.445466 kubelet[2517]: E0128 06:16:51.445446 2517 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 06:16:51.448844 kernel: audit: type=1325 audit(1769581011.274:344): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2539 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.274000 audit[2539]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.454412 kubelet[2517]: W0128 06:16:51.454105 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:51.454575 kubelet[2517]: E0128 06:16:51.454550 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:51.274000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffffd1e75c0 a2=0 a3=0 items=0 ppid=2517 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:16:51.362000 audit[2543]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.362000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcd4270530 a2=0 a3=0 items=0 ppid=2517 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.362000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 06:16:51.425000 audit[2545]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:16:51.425000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec881eae0 a2=0 a3=0 items=0 ppid=2517 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 06:16:51.426000 audit[2546]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.426000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe89fe47d0 a2=0 a3=0 items=0 ppid=2517 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 06:16:51.440000 audit[2548]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:16:51.440000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeed4578f0 a2=0 a3=0 items=0 ppid=2517 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 06:16:51.454000 audit[2547]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.454000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef58bbd80 a2=0 a3=0 items=0 ppid=2517 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 06:16:51.455000 audit[2549]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:16:51.455000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef7710f70 a2=0 a3=0 items=0 ppid=2517 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 06:16:51.459000 audit[2551]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:16:51.459000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc84599720 a2=0 a3=0 items=0 ppid=2517 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 06:16:51.463000 audit[2552]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:16:51.463000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc66d76350 a2=0 a3=0 items=0 ppid=2517 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:51.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 06:16:51.475708 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 06:16:51.499795 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 06:16:51.510880 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 28 06:16:51.526779 kubelet[2517]: I0128 06:16:51.526746 2517 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 06:16:51.527080 kubelet[2517]: I0128 06:16:51.527062 2517 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 06:16:51.527486 kubelet[2517]: I0128 06:16:51.527368 2517 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 06:16:51.527785 kubelet[2517]: I0128 06:16:51.527771 2517 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 06:16:51.532518 kubelet[2517]: E0128 06:16:51.532015 2517 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 06:16:51.532518 kubelet[2517]: E0128 06:16:51.532120 2517 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 06:16:51.562974 systemd[1]: Created slice kubepods-burstable-podb468431ede66e7319c2543fbcd5789c0.slice - libcontainer container kubepods-burstable-podb468431ede66e7319c2543fbcd5789c0.slice. Jan 28 06:16:51.584667 kubelet[2517]: E0128 06:16:51.584037 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:51.590605 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
Jan 28 06:16:51.637816 kubelet[2517]: E0128 06:16:51.637757 2517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" Jan 28 06:16:51.637816 kubelet[2517]: I0128 06:16:51.637871 2517 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 06:16:51.640558 kubelet[2517]: E0128 06:16:51.640020 2517 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 28 06:16:51.645828 kubelet[2517]: I0128 06:16:51.644543 2517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:16:51.647131 kubelet[2517]: I0128 06:16:51.646477 2517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:16:51.647131 kubelet[2517]: I0128 06:16:51.646658 2517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 06:16:51.647131 kubelet[2517]: I0128 06:16:51.646682 2517 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b468431ede66e7319c2543fbcd5789c0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b468431ede66e7319c2543fbcd5789c0\") " pod="kube-system/kube-apiserver-localhost" Jan 28 06:16:51.647131 kubelet[2517]: I0128 06:16:51.646695 2517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b468431ede66e7319c2543fbcd5789c0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b468431ede66e7319c2543fbcd5789c0\") " pod="kube-system/kube-apiserver-localhost" Jan 28 06:16:51.647131 kubelet[2517]: I0128 06:16:51.646711 2517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b468431ede66e7319c2543fbcd5789c0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b468431ede66e7319c2543fbcd5789c0\") " pod="kube-system/kube-apiserver-localhost" Jan 28 06:16:51.648532 kubelet[2517]: I0128 06:16:51.647089 2517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:16:51.648532 kubelet[2517]: I0128 06:16:51.647114 2517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:16:51.648532 kubelet[2517]: I0128 06:16:51.647130 2517 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:16:51.651382 kubelet[2517]: E0128 06:16:51.650768 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:51.746023 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 28 06:16:51.752717 kubelet[2517]: E0128 06:16:51.752531 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:51.753841 kubelet[2517]: E0128 06:16:51.753465 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:51.756416 containerd[1608]: time="2026-01-28T06:16:51.756116070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 06:16:51.791591 kubelet[2517]: W0128 06:16:51.791079 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:51.791591 kubelet[2517]: E0128 06:16:51.791464 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:51.845421 kubelet[2517]: I0128 06:16:51.844913 2517 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 06:16:51.847041 kubelet[2517]: E0128 06:16:51.846746 2517 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 28 06:16:51.888384 kubelet[2517]: E0128 06:16:51.886745 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:51.896351 containerd[1608]: time="2026-01-28T06:16:51.894766328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b468431ede66e7319c2543fbcd5789c0,Namespace:kube-system,Attempt:0,}" Jan 28 06:16:51.956369 kubelet[2517]: E0128 06:16:51.955681 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:51.959676 containerd[1608]: time="2026-01-28T06:16:51.959630617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 06:16:51.965339 containerd[1608]: time="2026-01-28T06:16:51.964963306Z" level=info msg="connecting to shim bb79f7f234cf263b2ecbb2bad30bb80f95393f5bbefd6fac3c6f7ad34de1b627" address="unix:///run/containerd/s/6ecb8c09c32c5c48f869e685f448ac5e0fb8c176ae0adf7e83f54c3050c541ec" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:16:52.066827 systemd[1]: Started cri-containerd-bb79f7f234cf263b2ecbb2bad30bb80f95393f5bbefd6fac3c6f7ad34de1b627.scope - 
libcontainer container bb79f7f234cf263b2ecbb2bad30bb80f95393f5bbefd6fac3c6f7ad34de1b627. Jan 28 06:16:52.089538 containerd[1608]: time="2026-01-28T06:16:52.087022694Z" level=info msg="connecting to shim 36a649f24decd93e75ebfd0b6fbffff54a06b2f097aeda6a784d4457220ce933" address="unix:///run/containerd/s/f5cd6d70335827e3a8b32a9f0aac086f88923f62727b4cd4121160f6c4cb2c52" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:16:52.143000 audit: BPF prog-id=83 op=LOAD Jan 28 06:16:52.144000 audit: BPF prog-id=84 op=LOAD Jan 28 06:16:52.144000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373966376632333463663236336232656362623262616433306262 Jan 28 06:16:52.144000 audit: BPF prog-id=84 op=UNLOAD Jan 28 06:16:52.144000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373966376632333463663236336232656362623262616433306262 Jan 28 06:16:52.145000 audit: BPF prog-id=85 op=LOAD Jan 28 06:16:52.145000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 
items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373966376632333463663236336232656362623262616433306262 Jan 28 06:16:52.145000 audit: BPF prog-id=86 op=LOAD Jan 28 06:16:52.145000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373966376632333463663236336232656362623262616433306262 Jan 28 06:16:52.145000 audit: BPF prog-id=86 op=UNLOAD Jan 28 06:16:52.145000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373966376632333463663236336232656362623262616433306262 Jan 28 06:16:52.145000 audit: BPF prog-id=85 op=UNLOAD Jan 28 06:16:52.145000 audit[2571]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373966376632333463663236336232656362623262616433306262 Jan 28 06:16:52.145000 audit: BPF prog-id=87 op=LOAD Jan 28 06:16:52.145000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262373966376632333463663236336232656362623262616433306262 Jan 28 06:16:52.163128 containerd[1608]: time="2026-01-28T06:16:52.162767576Z" level=info msg="connecting to shim ac5bad482510f3b24523b9bada0a2652e7874c9b92e208126d140ca72e6d2eca" address="unix:///run/containerd/s/d6bfac9efa8dde55c3bddd0d80b41b6f42cc96280347ef404b1827cf0c5602c0" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:16:52.256000 kubelet[2517]: I0128 06:16:52.255965 2517 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 06:16:52.261424 kubelet[2517]: E0128 06:16:52.261308 2517 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 28 06:16:52.316618 systemd[1]: Started 
cri-containerd-ac5bad482510f3b24523b9bada0a2652e7874c9b92e208126d140ca72e6d2eca.scope - libcontainer container ac5bad482510f3b24523b9bada0a2652e7874c9b92e208126d140ca72e6d2eca. Jan 28 06:16:52.324693 kubelet[2517]: W0128 06:16:52.324477 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:52.325467 kubelet[2517]: E0128 06:16:52.324962 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:52.331482 kubelet[2517]: W0128 06:16:52.331391 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:52.331482 kubelet[2517]: E0128 06:16:52.331456 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:52.367049 systemd[1]: Started cri-containerd-36a649f24decd93e75ebfd0b6fbffff54a06b2f097aeda6a784d4457220ce933.scope - libcontainer container 36a649f24decd93e75ebfd0b6fbffff54a06b2f097aeda6a784d4457220ce933. 
Jan 28 06:16:52.411888 containerd[1608]: time="2026-01-28T06:16:52.410066337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb79f7f234cf263b2ecbb2bad30bb80f95393f5bbefd6fac3c6f7ad34de1b627\"" Jan 28 06:16:52.459096 kubelet[2517]: E0128 06:16:52.458801 2517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="1.6s" Jan 28 06:16:52.527000 audit: BPF prog-id=88 op=LOAD Jan 28 06:16:52.533000 audit: BPF prog-id=89 op=LOAD Jan 28 06:16:52.533000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2611 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356261643438323531306633623234353233623962616461306132 Jan 28 06:16:52.534000 audit: BPF prog-id=89 op=UNLOAD Jan 28 06:16:52.534000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.534000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356261643438323531306633623234353233623962616461306132 Jan 28 06:16:52.536620 kubelet[2517]: E0128 06:16:52.535333 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:52.535000 audit: BPF prog-id=90 op=LOAD Jan 28 06:16:52.535000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2611 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356261643438323531306633623234353233623962616461306132 Jan 28 06:16:52.536000 audit: BPF prog-id=91 op=LOAD Jan 28 06:16:52.536000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2611 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356261643438323531306633623234353233623962616461306132 Jan 28 06:16:52.536000 audit: BPF prog-id=91 op=UNLOAD Jan 28 06:16:52.536000 audit[2631]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356261643438323531306633623234353233623962616461306132 Jan 28 06:16:52.537000 audit: BPF prog-id=90 op=UNLOAD Jan 28 06:16:52.537000 audit[2631]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356261643438323531306633623234353233623962616461306132 Jan 28 06:16:52.537000 audit: BPF prog-id=92 op=LOAD Jan 28 06:16:52.537000 audit[2631]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2611 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356261643438323531306633623234353233623962616461306132 Jan 28 06:16:52.575000 audit: BPF prog-id=93 op=LOAD Jan 28 06:16:52.577000 audit: BPF 
prog-id=94 op=LOAD Jan 28 06:16:52.577000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613634396632346465636439336537356562666430623666626666 Jan 28 06:16:52.577000 audit: BPF prog-id=94 op=UNLOAD Jan 28 06:16:52.577000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613634396632346465636439336537356562666430623666626666 Jan 28 06:16:52.577000 audit: BPF prog-id=95 op=LOAD Jan 28 06:16:52.577000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.577000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613634396632346465636439336537356562666430623666626666 Jan 28 06:16:52.577000 audit: BPF prog-id=96 op=LOAD Jan 28 06:16:52.577000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613634396632346465636439336537356562666430623666626666 Jan 28 06:16:52.577000 audit: BPF prog-id=96 op=UNLOAD Jan 28 06:16:52.577000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613634396632346465636439336537356562666430623666626666 Jan 28 06:16:52.578000 audit: BPF prog-id=95 op=UNLOAD Jan 28 06:16:52.578000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:16:52.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613634396632346465636439336537356562666430623666626666 Jan 28 06:16:52.578000 audit: BPF prog-id=97 op=LOAD Jan 28 06:16:52.578000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:52.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336613634396632346465636439336537356562666430623666626666 Jan 28 06:16:52.583318 containerd[1608]: time="2026-01-28T06:16:52.582037763Z" level=info msg="CreateContainer within sandbox \"bb79f7f234cf263b2ecbb2bad30bb80f95393f5bbefd6fac3c6f7ad34de1b627\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 06:16:52.610968 containerd[1608]: time="2026-01-28T06:16:52.610858662Z" level=info msg="Container 243035a48f2c3c3a5758023f502d157a525f4c46880520a2865e028d518316d5: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:16:52.625924 containerd[1608]: time="2026-01-28T06:16:52.625747583Z" level=info msg="CreateContainer within sandbox \"bb79f7f234cf263b2ecbb2bad30bb80f95393f5bbefd6fac3c6f7ad34de1b627\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"243035a48f2c3c3a5758023f502d157a525f4c46880520a2865e028d518316d5\"" Jan 28 06:16:52.630721 containerd[1608]: time="2026-01-28T06:16:52.630591103Z" level=info msg="StartContainer for \"243035a48f2c3c3a5758023f502d157a525f4c46880520a2865e028d518316d5\"" Jan 28 
06:16:52.634027 containerd[1608]: time="2026-01-28T06:16:52.633940484Z" level=info msg="connecting to shim 243035a48f2c3c3a5758023f502d157a525f4c46880520a2865e028d518316d5" address="unix:///run/containerd/s/6ecb8c09c32c5c48f869e685f448ac5e0fb8c176ae0adf7e83f54c3050c541ec" protocol=ttrpc version=3 Jan 28 06:16:52.722330 containerd[1608]: time="2026-01-28T06:16:52.721505980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b468431ede66e7319c2543fbcd5789c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac5bad482510f3b24523b9bada0a2652e7874c9b92e208126d140ca72e6d2eca\"" Jan 28 06:16:52.725416 kubelet[2517]: E0128 06:16:52.725013 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:52.731867 containerd[1608]: time="2026-01-28T06:16:52.731695582Z" level=info msg="CreateContainer within sandbox \"ac5bad482510f3b24523b9bada0a2652e7874c9b92e208126d140ca72e6d2eca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 06:16:53.086939 kubelet[2517]: E0128 06:16:53.078862 2517 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:53.095363 kubelet[2517]: W0128 06:16:53.094754 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:53.095363 kubelet[2517]: E0128 06:16:53.094941 2517 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:53.125414 kubelet[2517]: I0128 06:16:53.125030 2517 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 06:16:53.126551 kubelet[2517]: E0128 06:16:53.126429 2517 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 28 06:16:53.147844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243594331.mount: Deactivated successfully. Jan 28 06:16:53.150649 containerd[1608]: time="2026-01-28T06:16:53.148806981Z" level=info msg="Container ae4bce5a72e8a10ec660b7f2fa367cf04c44b8e6603c5711d969516e72bfcb1c: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:16:53.177990 containerd[1608]: time="2026-01-28T06:16:53.177859113Z" level=info msg="CreateContainer within sandbox \"ac5bad482510f3b24523b9bada0a2652e7874c9b92e208126d140ca72e6d2eca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ae4bce5a72e8a10ec660b7f2fa367cf04c44b8e6603c5711d969516e72bfcb1c\"" Jan 28 06:16:53.184997 containerd[1608]: time="2026-01-28T06:16:53.184436059Z" level=info msg="StartContainer for \"ae4bce5a72e8a10ec660b7f2fa367cf04c44b8e6603c5711d969516e72bfcb1c\"" Jan 28 06:16:53.186679 containerd[1608]: time="2026-01-28T06:16:53.186390522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"36a649f24decd93e75ebfd0b6fbffff54a06b2f097aeda6a784d4457220ce933\"" Jan 28 06:16:53.191544 kubelet[2517]: E0128 06:16:53.191383 2517 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:53.197073 containerd[1608]: time="2026-01-28T06:16:53.196573392Z" level=info msg="connecting to shim ae4bce5a72e8a10ec660b7f2fa367cf04c44b8e6603c5711d969516e72bfcb1c" address="unix:///run/containerd/s/d6bfac9efa8dde55c3bddd0d80b41b6f42cc96280347ef404b1827cf0c5602c0" protocol=ttrpc version=3 Jan 28 06:16:53.202694 containerd[1608]: time="2026-01-28T06:16:53.202585272Z" level=info msg="CreateContainer within sandbox \"36a649f24decd93e75ebfd0b6fbffff54a06b2f097aeda6a784d4457220ce933\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 06:16:53.226512 systemd[1]: Started cri-containerd-243035a48f2c3c3a5758023f502d157a525f4c46880520a2865e028d518316d5.scope - libcontainer container 243035a48f2c3c3a5758023f502d157a525f4c46880520a2865e028d518316d5. Jan 28 06:16:53.262398 containerd[1608]: time="2026-01-28T06:16:53.261913064Z" level=info msg="Container dff1545d77a415f0a9b263e0b878f804f3614f4313fea3639b9d088111ef0a48: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:16:53.300386 containerd[1608]: time="2026-01-28T06:16:53.299391668Z" level=info msg="CreateContainer within sandbox \"36a649f24decd93e75ebfd0b6fbffff54a06b2f097aeda6a784d4457220ce933\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dff1545d77a415f0a9b263e0b878f804f3614f4313fea3639b9d088111ef0a48\"" Jan 28 06:16:53.302070 containerd[1608]: time="2026-01-28T06:16:53.301718533Z" level=info msg="StartContainer for \"dff1545d77a415f0a9b263e0b878f804f3614f4313fea3639b9d088111ef0a48\"" Jan 28 06:16:53.316015 containerd[1608]: time="2026-01-28T06:16:53.315026055Z" level=info msg="connecting to shim dff1545d77a415f0a9b263e0b878f804f3614f4313fea3639b9d088111ef0a48" address="unix:///run/containerd/s/f5cd6d70335827e3a8b32a9f0aac086f88923f62727b4cd4121160f6c4cb2c52" protocol=ttrpc version=3 Jan 28 
06:16:53.336757 systemd[1]: Started cri-containerd-ae4bce5a72e8a10ec660b7f2fa367cf04c44b8e6603c5711d969516e72bfcb1c.scope - libcontainer container ae4bce5a72e8a10ec660b7f2fa367cf04c44b8e6603c5711d969516e72bfcb1c. Jan 28 06:16:53.343000 audit: BPF prog-id=98 op=LOAD Jan 28 06:16:53.345000 audit: BPF prog-id=99 op=LOAD Jan 28 06:16:53.345000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2561 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333033356134386632633363336135373538303233663530326431 Jan 28 06:16:53.345000 audit: BPF prog-id=99 op=UNLOAD Jan 28 06:16:53.345000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333033356134386632633363336135373538303233663530326431 Jan 28 06:16:53.349000 audit: BPF prog-id=100 op=LOAD Jan 28 06:16:53.349000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2561 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 28 06:16:53.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333033356134386632633363336135373538303233663530326431 Jan 28 06:16:53.349000 audit: BPF prog-id=101 op=LOAD Jan 28 06:16:53.349000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2561 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333033356134386632633363336135373538303233663530326431 Jan 28 06:16:53.349000 audit: BPF prog-id=101 op=UNLOAD Jan 28 06:16:53.349000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333033356134386632633363336135373538303233663530326431 Jan 28 06:16:53.349000 audit: BPF prog-id=100 op=UNLOAD Jan 28 06:16:53.349000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333033356134386632633363336135373538303233663530326431 Jan 28 06:16:53.349000 audit: BPF prog-id=102 op=LOAD Jan 28 06:16:53.349000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2561 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234333033356134386632633363336135373538303233663530326431 Jan 28 06:16:53.459000 audit: BPF prog-id=103 op=LOAD Jan 28 06:16:53.461000 audit: BPF prog-id=104 op=LOAD Jan 28 06:16:53.461000 audit[2700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2611 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165346263653561373265386131306563363630623766326661333637 Jan 28 06:16:53.461000 audit: BPF prog-id=104 op=UNLOAD Jan 28 06:16:53.461000 audit[2700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2700 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165346263653561373265386131306563363630623766326661333637 Jan 28 06:16:53.461000 audit: BPF prog-id=105 op=LOAD Jan 28 06:16:53.461000 audit[2700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2611 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165346263653561373265386131306563363630623766326661333637 Jan 28 06:16:53.461000 audit: BPF prog-id=106 op=LOAD Jan 28 06:16:53.461000 audit[2700]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2611 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165346263653561373265386131306563363630623766326661333637 Jan 28 06:16:53.461000 audit: BPF prog-id=106 op=UNLOAD Jan 28 06:16:53.461000 audit[2700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=2611 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165346263653561373265386131306563363630623766326661333637 Jan 28 06:16:53.461000 audit: BPF prog-id=105 op=UNLOAD Jan 28 06:16:53.461000 audit[2700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2611 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165346263653561373265386131306563363630623766326661333637 Jan 28 06:16:53.461000 audit: BPF prog-id=107 op=LOAD Jan 28 06:16:53.461000 audit[2700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2611 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165346263653561373265386131306563363630623766326661333637 Jan 28 06:16:53.523523 systemd[1]: Started cri-containerd-dff1545d77a415f0a9b263e0b878f804f3614f4313fea3639b9d088111ef0a48.scope - 
libcontainer container dff1545d77a415f0a9b263e0b878f804f3614f4313fea3639b9d088111ef0a48. Jan 28 06:16:53.529986 containerd[1608]: time="2026-01-28T06:16:53.529790270Z" level=info msg="StartContainer for \"243035a48f2c3c3a5758023f502d157a525f4c46880520a2865e028d518316d5\" returns successfully" Jan 28 06:16:53.607000 audit: BPF prog-id=108 op=LOAD Jan 28 06:16:53.608000 audit: BPF prog-id=109 op=LOAD Jan 28 06:16:53.608000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2591 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663135343564373761343135663061396232363365306238373866 Jan 28 06:16:53.609000 audit: BPF prog-id=109 op=UNLOAD Jan 28 06:16:53.609000 audit[2719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663135343564373761343135663061396232363365306238373866 Jan 28 06:16:53.609000 audit: BPF prog-id=110 op=LOAD Jan 28 06:16:53.609000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2591 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663135343564373761343135663061396232363365306238373866 Jan 28 06:16:53.609000 audit: BPF prog-id=111 op=LOAD Jan 28 06:16:53.609000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2591 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663135343564373761343135663061396232363365306238373866 Jan 28 06:16:53.609000 audit: BPF prog-id=111 op=UNLOAD Jan 28 06:16:53.609000 audit[2719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663135343564373761343135663061396232363365306238373866 Jan 28 06:16:53.609000 audit: BPF prog-id=110 op=UNLOAD Jan 28 06:16:53.609000 audit[2719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663135343564373761343135663061396232363365306238373866 Jan 28 06:16:53.610000 audit: BPF prog-id=112 op=LOAD Jan 28 06:16:53.610000 audit[2719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2591 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:16:53.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663135343564373761343135663061396232363365306238373866 Jan 28 06:16:53.624764 kubelet[2517]: E0128 06:16:53.624738 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:53.626549 kubelet[2517]: E0128 06:16:53.625746 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:53.686614 containerd[1608]: time="2026-01-28T06:16:53.686554225Z" level=info msg="StartContainer for \"ae4bce5a72e8a10ec660b7f2fa367cf04c44b8e6603c5711d969516e72bfcb1c\" returns successfully" Jan 28 06:16:53.874365 containerd[1608]: time="2026-01-28T06:16:53.872967771Z" level=info msg="StartContainer for \"dff1545d77a415f0a9b263e0b878f804f3614f4313fea3639b9d088111ef0a48\" returns successfully" Jan 28 
06:16:53.876523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793012719.mount: Deactivated successfully. Jan 28 06:16:53.977998 kubelet[2517]: W0128 06:16:53.977042 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:53.977998 kubelet[2517]: E0128 06:16:53.977530 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:54.067803 kubelet[2517]: E0128 06:16:54.066798 2517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="3.2s" Jan 28 06:16:54.132016 kubelet[2517]: W0128 06:16:54.131188 2517 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.25:6443: connect: connection refused Jan 28 06:16:54.132016 kubelet[2517]: E0128 06:16:54.131620 2517 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:16:54.675023 kubelet[2517]: E0128 06:16:54.674574 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:54.676761 kubelet[2517]: E0128 06:16:54.675397 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:54.689070 kubelet[2517]: E0128 06:16:54.688529 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:54.689070 kubelet[2517]: E0128 06:16:54.688625 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:54.689413 kubelet[2517]: E0128 06:16:54.689396 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:54.689538 kubelet[2517]: E0128 06:16:54.689525 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:54.735551 kubelet[2517]: I0128 06:16:54.735434 2517 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 06:16:55.726862 kubelet[2517]: E0128 06:16:55.725949 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:55.726862 kubelet[2517]: E0128 06:16:55.726956 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:55.726862 kubelet[2517]: E0128 06:16:55.727055 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:55.729051 kubelet[2517]: E0128 06:16:55.727829 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:55.731414 kubelet[2517]: E0128 06:16:55.729436 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:55.735783 kubelet[2517]: E0128 06:16:55.734163 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:56.778528 kubelet[2517]: E0128 06:16:56.777576 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:56.778528 kubelet[2517]: E0128 06:16:56.777896 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:57.766835 kubelet[2517]: E0128 06:16:57.766422 2517 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 06:16:57.766835 kubelet[2517]: E0128 06:16:57.766737 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:16:59.756189 kubelet[2517]: E0128 06:16:59.755730 2517 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 06:16:59.929612 kubelet[2517]: I0128 06:16:59.928739 2517 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 06:16:59.929612 
kubelet[2517]: E0128 06:16:59.928811 2517 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 06:16:59.981470 kubelet[2517]: I0128 06:16:59.981359 2517 apiserver.go:52] "Watching apiserver" Jan 28 06:17:00.000015 kubelet[2517]: I0128 06:16:59.999989 2517 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 06:17:00.001341 kubelet[2517]: I0128 06:17:00.000509 2517 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:00.010871 kubelet[2517]: E0128 06:17:00.009947 2517 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:00.011049 kubelet[2517]: I0128 06:17:00.011033 2517 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:00.015718 kubelet[2517]: E0128 06:17:00.014962 2517 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:00.015718 kubelet[2517]: I0128 06:17:00.015005 2517 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 06:17:00.018809 kubelet[2517]: E0128 06:17:00.018701 2517 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 06:17:01.992169 kubelet[2517]: I0128 06:17:01.991730 2517 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:02.011005 kubelet[2517]: E0128 06:17:02.010648 
2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:02.801503 kubelet[2517]: E0128 06:17:02.800540 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:03.383905 kubelet[2517]: I0128 06:17:03.383428 2517 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 06:17:03.401713 kubelet[2517]: E0128 06:17:03.400817 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:03.466786 systemd[1]: Reload requested from client PID 2790 ('systemctl') (unit session-8.scope)... Jan 28 06:17:03.466955 systemd[1]: Reloading... Jan 28 06:17:03.579696 kubelet[2517]: I0128 06:17:03.579509 2517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.5793644650000003 podStartE2EDuration="2.579364465s" podCreationTimestamp="2026-01-28 06:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:17:03.547558962 +0000 UTC m=+13.613001780" watchObservedRunningTime="2026-01-28 06:17:03.579364465 +0000 UTC m=+13.644807283" Jan 28 06:17:03.581734 kubelet[2517]: I0128 06:17:03.579747 2517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.579738918 podStartE2EDuration="579.738918ms" podCreationTimestamp="2026-01-28 06:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
06:17:03.57647204 +0000 UTC m=+13.641914858" watchObservedRunningTime="2026-01-28 06:17:03.579738918 +0000 UTC m=+13.645181736" Jan 28 06:17:03.643430 zram_generator::config[2839]: No configuration found. Jan 28 06:17:03.810800 kubelet[2517]: E0128 06:17:03.810619 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:04.123782 systemd[1]: Reloading finished in 656 ms. Jan 28 06:17:04.199844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 06:17:04.222858 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 06:17:04.224157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:17:04.224780 systemd[1]: kubelet.service: Consumed 4.338s CPU time, 133.2M memory peak. Jan 28 06:17:04.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:17:04.230805 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 28 06:17:04.230884 kernel: audit: type=1131 audit(1769581024.223:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:17:04.251936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 06:17:04.252000 audit: BPF prog-id=113 op=LOAD Jan 28 06:17:04.252000 audit: BPF prog-id=66 op=UNLOAD Jan 28 06:17:04.254000 audit: BPF prog-id=114 op=LOAD Jan 28 06:17:04.254000 audit: BPF prog-id=115 op=LOAD Jan 28 06:17:04.254000 audit: BPF prog-id=67 op=UNLOAD Jan 28 06:17:04.254000 audit: BPF prog-id=68 op=UNLOAD Jan 28 06:17:04.257000 audit: BPF prog-id=116 op=LOAD Jan 28 06:17:04.270336 kernel: audit: type=1334 audit(1769581024.252:402): prog-id=113 op=LOAD Jan 28 06:17:04.270374 kernel: audit: type=1334 audit(1769581024.252:403): prog-id=66 op=UNLOAD Jan 28 06:17:04.270391 kernel: audit: type=1334 audit(1769581024.254:404): prog-id=114 op=LOAD Jan 28 06:17:04.270414 kernel: audit: type=1334 audit(1769581024.254:405): prog-id=115 op=LOAD Jan 28 06:17:04.270486 kernel: audit: type=1334 audit(1769581024.254:406): prog-id=67 op=UNLOAD Jan 28 06:17:04.270503 kernel: audit: type=1334 audit(1769581024.254:407): prog-id=68 op=UNLOAD Jan 28 06:17:04.270519 kernel: audit: type=1334 audit(1769581024.257:408): prog-id=116 op=LOAD Jan 28 06:17:04.257000 audit: BPF prog-id=82 op=UNLOAD Jan 28 06:17:04.260000 audit: BPF prog-id=117 op=LOAD Jan 28 06:17:04.336178 kernel: audit: type=1334 audit(1769581024.257:409): prog-id=82 op=UNLOAD Jan 28 06:17:04.336612 kernel: audit: type=1334 audit(1769581024.260:410): prog-id=117 op=LOAD Jan 28 06:17:04.260000 audit: BPF prog-id=81 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=118 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=72 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=119 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=120 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=73 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=74 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=121 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=69 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=122 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=123 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=70 
op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=71 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=124 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=125 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=79 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=80 op=UNLOAD Jan 28 06:17:04.261000 audit: BPF prog-id=126 op=LOAD Jan 28 06:17:04.261000 audit: BPF prog-id=75 op=UNLOAD Jan 28 06:17:04.269000 audit: BPF prog-id=127 op=LOAD Jan 28 06:17:04.269000 audit: BPF prog-id=63 op=UNLOAD Jan 28 06:17:04.269000 audit: BPF prog-id=128 op=LOAD Jan 28 06:17:04.269000 audit: BPF prog-id=129 op=LOAD Jan 28 06:17:04.269000 audit: BPF prog-id=64 op=UNLOAD Jan 28 06:17:04.269000 audit: BPF prog-id=65 op=UNLOAD Jan 28 06:17:04.275000 audit: BPF prog-id=130 op=LOAD Jan 28 06:17:04.275000 audit: BPF prog-id=76 op=UNLOAD Jan 28 06:17:04.275000 audit: BPF prog-id=131 op=LOAD Jan 28 06:17:04.275000 audit: BPF prog-id=132 op=LOAD Jan 28 06:17:04.275000 audit: BPF prog-id=77 op=UNLOAD Jan 28 06:17:04.275000 audit: BPF prog-id=78 op=UNLOAD Jan 28 06:17:04.743572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 06:17:04.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:17:04.757750 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 06:17:04.958833 kubelet[2882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:17:04.958833 kubelet[2882]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 28 06:17:04.958833 kubelet[2882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:17:04.959570 kubelet[2882]: I0128 06:17:04.958863 2882 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 06:17:04.976161 kubelet[2882]: I0128 06:17:04.975979 2882 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 06:17:04.976161 kubelet[2882]: I0128 06:17:04.976160 2882 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 06:17:04.976710 kubelet[2882]: I0128 06:17:04.976601 2882 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 06:17:04.978611 kubelet[2882]: I0128 06:17:04.978507 2882 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 06:17:04.989360 kubelet[2882]: I0128 06:17:04.988769 2882 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 06:17:05.038829 kubelet[2882]: I0128 06:17:05.037899 2882 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 06:17:05.051335 kubelet[2882]: I0128 06:17:05.050808 2882 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 06:17:05.051335 kubelet[2882]: I0128 06:17:05.051164 2882 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 06:17:05.052085 kubelet[2882]: I0128 06:17:05.051328 2882 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 06:17:05.052085 kubelet[2882]: I0128 06:17:05.051786 2882 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 28 06:17:05.052085 kubelet[2882]: I0128 06:17:05.051797 2882 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 06:17:05.052085 kubelet[2882]: I0128 06:17:05.051850 2882 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:17:05.052933 kubelet[2882]: I0128 06:17:05.052399 2882 kubelet.go:446] "Attempting to sync node with API server" Jan 28 06:17:05.052933 kubelet[2882]: I0128 06:17:05.052426 2882 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 06:17:05.052933 kubelet[2882]: I0128 06:17:05.052447 2882 kubelet.go:352] "Adding apiserver pod source" Jan 28 06:17:05.052933 kubelet[2882]: I0128 06:17:05.052457 2882 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 06:17:05.068641 kubelet[2882]: I0128 06:17:05.068368 2882 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 06:17:05.072322 kubelet[2882]: I0128 06:17:05.071511 2882 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 06:17:05.076930 kubelet[2882]: I0128 06:17:05.076458 2882 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 06:17:05.076930 kubelet[2882]: I0128 06:17:05.076567 2882 server.go:1287] "Started kubelet" Jan 28 06:17:05.095998 kubelet[2882]: I0128 06:17:05.095653 2882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 06:17:05.095998 kubelet[2882]: I0128 06:17:05.096778 2882 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 06:17:05.100012 kubelet[2882]: I0128 06:17:05.099857 2882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 06:17:05.103406 kubelet[2882]: I0128 06:17:05.102996 2882 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 06:17:05.113152 kubelet[2882]: I0128 06:17:05.108929 2882 
volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 06:17:05.113152 kubelet[2882]: I0128 06:17:05.109982 2882 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 06:17:05.113152 kubelet[2882]: I0128 06:17:05.112833 2882 server.go:479] "Adding debug handlers to kubelet server" Jan 28 06:17:05.114359 kubelet[2882]: I0128 06:17:05.113560 2882 reconciler.go:26] "Reconciler: start to sync state" Jan 28 06:17:05.117433 kubelet[2882]: I0128 06:17:05.117132 2882 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 06:17:05.124870 kubelet[2882]: I0128 06:17:05.124637 2882 factory.go:221] Registration of the systemd container factory successfully Jan 28 06:17:05.132942 kubelet[2882]: I0128 06:17:05.132877 2882 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 06:17:05.138527 kubelet[2882]: E0128 06:17:05.137992 2882 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 06:17:05.141941 kubelet[2882]: I0128 06:17:05.141681 2882 factory.go:221] Registration of the containerd container factory successfully Jan 28 06:17:05.243623 kubelet[2882]: I0128 06:17:05.242549 2882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 06:17:05.272696 kubelet[2882]: I0128 06:17:05.271157 2882 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 06:17:05.272696 kubelet[2882]: I0128 06:17:05.271918 2882 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 06:17:05.272696 kubelet[2882]: I0128 06:17:05.271954 2882 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 06:17:05.272696 kubelet[2882]: I0128 06:17:05.272100 2882 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 06:17:05.282703 kubelet[2882]: E0128 06:17:05.281656 2882 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 06:17:05.383590 kubelet[2882]: E0128 06:17:05.383168 2882 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397481 2882 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397506 2882 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397616 2882 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397848 2882 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397859 2882 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397874 2882 policy_none.go:49] "None policy: Start" Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397883 2882 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 06:17:05.397771 kubelet[2882]: I0128 06:17:05.397894 2882 state_mem.go:35] "Initializing new in-memory state store" Jan 28 06:17:05.398360 kubelet[2882]: I0128 06:17:05.398163 2882 state_mem.go:75] "Updated machine memory state" Jan 28 06:17:05.423909 kubelet[2882]: I0128 06:17:05.423746 
2882 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 06:17:05.424499 kubelet[2882]: I0128 06:17:05.424160 2882 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 06:17:05.424499 kubelet[2882]: I0128 06:17:05.424178 2882 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 06:17:05.425279 kubelet[2882]: I0128 06:17:05.424979 2882 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 06:17:05.432387 kubelet[2882]: E0128 06:17:05.430815 2882 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 06:17:05.570405 kubelet[2882]: I0128 06:17:05.569982 2882 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 06:17:05.585965 kubelet[2882]: I0128 06:17:05.585654 2882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 06:17:05.587524 kubelet[2882]: I0128 06:17:05.586381 2882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:05.587524 kubelet[2882]: I0128 06:17:05.586708 2882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:05.602448 kubelet[2882]: I0128 06:17:05.602396 2882 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 06:17:05.602533 kubelet[2882]: I0128 06:17:05.602487 2882 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 06:17:05.621472 kubelet[2882]: E0128 06:17:05.619506 2882 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:05.624473 kubelet[2882]: E0128 06:17:05.621941 2882 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 06:17:05.640605 kubelet[2882]: I0128 06:17:05.640419 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:05.640605 kubelet[2882]: I0128 06:17:05.640447 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 06:17:05.640605 kubelet[2882]: I0128 06:17:05.640465 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b468431ede66e7319c2543fbcd5789c0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b468431ede66e7319c2543fbcd5789c0\") " pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:05.640605 kubelet[2882]: I0128 06:17:05.640478 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b468431ede66e7319c2543fbcd5789c0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b468431ede66e7319c2543fbcd5789c0\") " pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:05.640605 kubelet[2882]: I0128 06:17:05.640493 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:05.640829 kubelet[2882]: I0128 06:17:05.640508 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:05.640829 kubelet[2882]: I0128 06:17:05.640522 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:05.640829 kubelet[2882]: I0128 06:17:05.640536 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b468431ede66e7319c2543fbcd5789c0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b468431ede66e7319c2543fbcd5789c0\") " pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:05.640829 kubelet[2882]: I0128 06:17:05.640549 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 06:17:05.921584 kubelet[2882]: E0128 06:17:05.920359 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:05.921584 kubelet[2882]: E0128 06:17:05.920183 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:05.922949 kubelet[2882]: E0128 06:17:05.922341 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:06.077156 kubelet[2882]: I0128 06:17:06.074172 2882 apiserver.go:52] "Watching apiserver" Jan 28 06:17:06.212118 kubelet[2882]: I0128 06:17:06.211154 2882 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 06:17:06.390412 kubelet[2882]: I0128 06:17:06.389755 2882 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:06.392890 kubelet[2882]: E0128 06:17:06.392592 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:06.395147 kubelet[2882]: E0128 06:17:06.394380 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:06.428394 kubelet[2882]: E0128 06:17:06.427891 2882 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 06:17:06.429469 kubelet[2882]: E0128 06:17:06.429357 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:06.538711 kubelet[2882]: I0128 06:17:06.537668 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.53717451 podStartE2EDuration="1.53717451s" podCreationTimestamp="2026-01-28 06:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:17:06.492878296 +0000 UTC m=+1.675144832" watchObservedRunningTime="2026-01-28 06:17:06.53717451 +0000 UTC m=+1.719441035" Jan 28 06:17:07.399619 kubelet[2882]: E0128 06:17:07.398770 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:07.403816 kubelet[2882]: E0128 06:17:07.401686 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:09.411392 kubelet[2882]: I0128 06:17:09.410850 2882 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 06:17:09.415824 containerd[1608]: time="2026-01-28T06:17:09.415628291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 06:17:09.421480 kubelet[2882]: I0128 06:17:09.420428 2882 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 06:17:09.720651 systemd[1]: Created slice kubepods-besteffort-pod63e5783b_45a3_40a9_8fc9_9008513c629a.slice - libcontainer container kubepods-besteffort-pod63e5783b_45a3_40a9_8fc9_9008513c629a.slice. 
Jan 28 06:17:09.874311 kubelet[2882]: I0128 06:17:09.873139 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63e5783b-45a3-40a9-8fc9-9008513c629a-xtables-lock\") pod \"kube-proxy-rxhp9\" (UID: \"63e5783b-45a3-40a9-8fc9-9008513c629a\") " pod="kube-system/kube-proxy-rxhp9" Jan 28 06:17:09.874720 kubelet[2882]: I0128 06:17:09.874557 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63e5783b-45a3-40a9-8fc9-9008513c629a-lib-modules\") pod \"kube-proxy-rxhp9\" (UID: \"63e5783b-45a3-40a9-8fc9-9008513c629a\") " pod="kube-system/kube-proxy-rxhp9" Jan 28 06:17:09.875540 kubelet[2882]: I0128 06:17:09.874685 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6rs\" (UniqueName: \"kubernetes.io/projected/63e5783b-45a3-40a9-8fc9-9008513c629a-kube-api-access-lv6rs\") pod \"kube-proxy-rxhp9\" (UID: \"63e5783b-45a3-40a9-8fc9-9008513c629a\") " pod="kube-system/kube-proxy-rxhp9" Jan 28 06:17:09.875540 kubelet[2882]: I0128 06:17:09.874808 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63e5783b-45a3-40a9-8fc9-9008513c629a-kube-proxy\") pod \"kube-proxy-rxhp9\" (UID: \"63e5783b-45a3-40a9-8fc9-9008513c629a\") " pod="kube-system/kube-proxy-rxhp9" Jan 28 06:17:09.988736 systemd[1]: Created slice kubepods-besteffort-pod40755b70_a6be_42cb_b2e1_18b164d80123.slice - libcontainer container kubepods-besteffort-pod40755b70_a6be_42cb_b2e1_18b164d80123.slice. 
Jan 28 06:17:10.070947 kubelet[2882]: E0128 06:17:10.070398 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:10.080625 kubelet[2882]: I0128 06:17:10.080424 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40755b70-a6be-42cb-b2e1-18b164d80123-var-lib-calico\") pod \"tigera-operator-7dcd859c48-plbsg\" (UID: \"40755b70-a6be-42cb-b2e1-18b164d80123\") " pod="tigera-operator/tigera-operator-7dcd859c48-plbsg" Jan 28 06:17:10.081509 kubelet[2882]: I0128 06:17:10.081399 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nw8b\" (UniqueName: \"kubernetes.io/projected/40755b70-a6be-42cb-b2e1-18b164d80123-kube-api-access-6nw8b\") pod \"tigera-operator-7dcd859c48-plbsg\" (UID: \"40755b70-a6be-42cb-b2e1-18b164d80123\") " pod="tigera-operator/tigera-operator-7dcd859c48-plbsg" Jan 28 06:17:10.299506 containerd[1608]: time="2026-01-28T06:17:10.298824217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-plbsg,Uid:40755b70-a6be-42cb-b2e1-18b164d80123,Namespace:tigera-operator,Attempt:0,}" Jan 28 06:17:10.352295 kubelet[2882]: E0128 06:17:10.350178 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:10.354495 containerd[1608]: time="2026-01-28T06:17:10.354446327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxhp9,Uid:63e5783b-45a3-40a9-8fc9-9008513c629a,Namespace:kube-system,Attempt:0,}" Jan 28 06:17:10.429418 containerd[1608]: time="2026-01-28T06:17:10.427655750Z" level=info msg="connecting to shim 28b3cec4918110835bce214bb1dd90381d4e471f524e0f85277e21c85c374fb8" 
address="unix:///run/containerd/s/4e242aecc5b5d0ea53a0e07ebf93e089a204fd97b4d46aa10433936ce9a5123e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:17:10.476460 containerd[1608]: time="2026-01-28T06:17:10.476179323Z" level=info msg="connecting to shim d69ed3f527ced44570cf09632f35ae0a39e8b3db8661409e25a4a88c4f116757" address="unix:///run/containerd/s/d7b91ea334299609a261916fa00827fbdd6b278a4bd624017752ea6ed1a0cf99" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:17:10.483579 kubelet[2882]: E0128 06:17:10.483131 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:10.621476 systemd[1]: Started cri-containerd-28b3cec4918110835bce214bb1dd90381d4e471f524e0f85277e21c85c374fb8.scope - libcontainer container 28b3cec4918110835bce214bb1dd90381d4e471f524e0f85277e21c85c374fb8. Jan 28 06:17:10.639881 systemd[1]: Started cri-containerd-d69ed3f527ced44570cf09632f35ae0a39e8b3db8661409e25a4a88c4f116757.scope - libcontainer container d69ed3f527ced44570cf09632f35ae0a39e8b3db8661409e25a4a88c4f116757. 
Jan 28 06:17:10.695000 audit: BPF prog-id=133 op=LOAD Jan 28 06:17:10.705705 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 06:17:10.705859 kernel: audit: type=1334 audit(1769581030.695:443): prog-id=133 op=LOAD Jan 28 06:17:10.700000 audit: BPF prog-id=134 op=LOAD Jan 28 06:17:10.732167 kernel: audit: type=1334 audit(1769581030.700:444): prog-id=134 op=LOAD Jan 28 06:17:10.700000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.793545 kernel: audit: type=1300 audit(1769581030.700:444): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.793646 kernel: audit: type=1327 audit(1769581030.700:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.793679 kernel: audit: type=1334 audit(1769581030.700:445): prog-id=134 op=UNLOAD Jan 28 06:17:10.700000 audit: BPF prog-id=134 op=UNLOAD Jan 28 06:17:10.700000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2959 pid=2975 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.832751 kernel: audit: type=1300 audit(1769581030.700:445): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.834388 kernel: audit: type=1327 audit(1769581030.700:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.859795 kernel: audit: type=1334 audit(1769581030.701:446): prog-id=135 op=LOAD Jan 28 06:17:10.701000 audit: BPF prog-id=135 op=LOAD Jan 28 06:17:10.870601 kernel: audit: type=1300 audit(1769581030.701:446): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.701000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:17:10.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.903368 containerd[1608]: time="2026-01-28T06:17:10.902949832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxhp9,Uid:63e5783b-45a3-40a9-8fc9-9008513c629a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69ed3f527ced44570cf09632f35ae0a39e8b3db8661409e25a4a88c4f116757\"" Jan 28 06:17:10.933725 kubelet[2882]: E0128 06:17:10.918388 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:10.933883 kernel: audit: type=1327 audit(1769581030.701:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.933941 containerd[1608]: time="2026-01-28T06:17:10.927866772Z" level=info msg="CreateContainer within sandbox \"d69ed3f527ced44570cf09632f35ae0a39e8b3db8661409e25a4a88c4f116757\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 06:17:10.702000 audit: BPF prog-id=136 op=LOAD Jan 28 06:17:10.702000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.702000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.702000 audit: BPF prog-id=136 op=UNLOAD Jan 28 06:17:10.702000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.702000 audit: BPF prog-id=135 op=UNLOAD Jan 28 06:17:10.702000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.702000 audit: BPF prog-id=137 op=LOAD Jan 28 06:17:10.702000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2959 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:17:10.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396564336635323763656434343537306366303936333266333561 Jan 28 06:17:10.703000 audit: BPF prog-id=138 op=LOAD Jan 28 06:17:10.709000 audit: BPF prog-id=139 op=LOAD Jan 28 06:17:10.709000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2945 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623363656334393138313130383335626365323134626231646439 Jan 28 06:17:10.709000 audit: BPF prog-id=139 op=UNLOAD Jan 28 06:17:10.709000 audit[2968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2945 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623363656334393138313130383335626365323134626231646439 Jan 28 06:17:10.709000 audit: BPF prog-id=140 op=LOAD Jan 28 06:17:10.709000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2945 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623363656334393138313130383335626365323134626231646439 Jan 28 06:17:10.709000 audit: BPF prog-id=141 op=LOAD Jan 28 06:17:10.709000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2945 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623363656334393138313130383335626365323134626231646439 Jan 28 06:17:10.709000 audit: BPF prog-id=141 op=UNLOAD Jan 28 06:17:10.709000 audit[2968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2945 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623363656334393138313130383335626365323134626231646439 Jan 28 06:17:10.710000 audit: BPF prog-id=140 op=UNLOAD Jan 28 06:17:10.710000 audit[2968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2945 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623363656334393138313130383335626365323134626231646439 Jan 28 06:17:10.710000 audit: BPF prog-id=142 op=LOAD Jan 28 06:17:10.710000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2945 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:10.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238623363656334393138313130383335626365323134626231646439 Jan 28 06:17:10.978760 containerd[1608]: time="2026-01-28T06:17:10.978495235Z" level=info msg="Container 55ef31cc735ac25273ef4752c22a81018e18a8a32c57a5f3018fa48593a0532b: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:17:10.999779 containerd[1608]: time="2026-01-28T06:17:10.999652296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-plbsg,Uid:40755b70-a6be-42cb-b2e1-18b164d80123,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"28b3cec4918110835bce214bb1dd90381d4e471f524e0f85277e21c85c374fb8\"" Jan 28 06:17:11.014170 containerd[1608]: time="2026-01-28T06:17:11.014129791Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 06:17:11.026905 containerd[1608]: time="2026-01-28T06:17:11.026764777Z" level=info msg="CreateContainer within sandbox 
\"d69ed3f527ced44570cf09632f35ae0a39e8b3db8661409e25a4a88c4f116757\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"55ef31cc735ac25273ef4752c22a81018e18a8a32c57a5f3018fa48593a0532b\"" Jan 28 06:17:11.030470 containerd[1608]: time="2026-01-28T06:17:11.030171310Z" level=info msg="StartContainer for \"55ef31cc735ac25273ef4752c22a81018e18a8a32c57a5f3018fa48593a0532b\"" Jan 28 06:17:11.038160 containerd[1608]: time="2026-01-28T06:17:11.037536605Z" level=info msg="connecting to shim 55ef31cc735ac25273ef4752c22a81018e18a8a32c57a5f3018fa48593a0532b" address="unix:///run/containerd/s/d7b91ea334299609a261916fa00827fbdd6b278a4bd624017752ea6ed1a0cf99" protocol=ttrpc version=3 Jan 28 06:17:11.194859 systemd[1]: Started cri-containerd-55ef31cc735ac25273ef4752c22a81018e18a8a32c57a5f3018fa48593a0532b.scope - libcontainer container 55ef31cc735ac25273ef4752c22a81018e18a8a32c57a5f3018fa48593a0532b. Jan 28 06:17:11.293000 audit: BPF prog-id=143 op=LOAD Jan 28 06:17:11.293000 audit[3027]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2959 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:11.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535656633316363373335616332353237336566343735326332326138 Jan 28 06:17:11.293000 audit: BPF prog-id=144 op=LOAD Jan 28 06:17:11.293000 audit[3027]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2959 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:17:11.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535656633316363373335616332353237336566343735326332326138 Jan 28 06:17:11.293000 audit: BPF prog-id=144 op=UNLOAD Jan 28 06:17:11.293000 audit[3027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2959 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:11.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535656633316363373335616332353237336566343735326332326138 Jan 28 06:17:11.294000 audit: BPF prog-id=143 op=UNLOAD Jan 28 06:17:11.294000 audit[3027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2959 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:11.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535656633316363373335616332353237336566343735326332326138 Jan 28 06:17:11.294000 audit: BPF prog-id=145 op=LOAD Jan 28 06:17:11.294000 audit[3027]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2959 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:11.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535656633316363373335616332353237336566343735326332326138 Jan 28 06:17:11.353466 containerd[1608]: time="2026-01-28T06:17:11.353172342Z" level=info msg="StartContainer for \"55ef31cc735ac25273ef4752c22a81018e18a8a32c57a5f3018fa48593a0532b\" returns successfully" Jan 28 06:17:11.495438 kubelet[2882]: E0128 06:17:11.492941 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:11.524885 kubelet[2882]: E0128 06:17:11.524133 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:11.869614 kubelet[2882]: E0128 06:17:11.867097 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:11.995407 kubelet[2882]: I0128 06:17:11.995168 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rxhp9" podStartSLOduration=2.995135168 podStartE2EDuration="2.995135168s" podCreationTimestamp="2026-01-28 06:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:17:11.576132729 +0000 UTC m=+6.758399255" watchObservedRunningTime="2026-01-28 06:17:11.995135168 +0000 UTC m=+7.177401694" Jan 28 06:17:12.186965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693908625.mount: Deactivated successfully. 
Jan 28 06:17:12.517000 audit[3098]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.517000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3ec66780 a2=0 a3=7ffc3ec6676c items=0 ppid=3039 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 06:17:12.520000 audit[3099]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:12.520000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe52cf57a0 a2=0 a3=7ffe52cf578c items=0 ppid=3039 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 06:17:12.529000 audit[3100]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:12.529000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0d337550 a2=0 a3=7fff0d33753c items=0 ppid=3039 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.529000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 06:17:12.538000 audit[3103]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:12.538000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb87ff0e0 a2=0 a3=7ffeb87ff0cc items=0 ppid=3039 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 06:17:12.543282 kubelet[2882]: E0128 06:17:12.542920 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:12.564000 audit[3102]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.564000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebf0e75f0 a2=0 a3=7ffebf0e75dc items=0 ppid=3039 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 06:17:12.576000 audit[3105]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.576000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe7a6a2e0 a2=0 
a3=7fffe7a6a2cc items=0 ppid=3039 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 06:17:12.691000 audit[3106]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.691000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc096e3a20 a2=0 a3=7ffc096e3a0c items=0 ppid=3039 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 06:17:12.722000 audit[3108]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.722000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffe2836260 a2=0 a3=7fffe283624c items=0 ppid=3039 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 06:17:12.755000 audit[3111]: NETFILTER_CFG table=filter:62 family=2 entries=1 
op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.755000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce9286150 a2=0 a3=7ffce928613c items=0 ppid=3039 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 06:17:12.766000 audit[3112]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.766000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff850d7650 a2=0 a3=7fff850d763c items=0 ppid=3039 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 06:17:12.785000 audit[3114]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.785000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc5cbcae0 a2=0 a3=7ffcc5cbcacc items=0 ppid=3039 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.785000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 06:17:12.794000 audit[3115]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.794000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe89447990 a2=0 a3=7ffe8944797c items=0 ppid=3039 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 06:17:12.814000 audit[3117]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.814000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeec7189a0 a2=0 a3=7ffeec71898c items=0 ppid=3039 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 06:17:12.838000 audit[3120]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.838000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffc159a4e20 a2=0 a3=7ffc159a4e0c items=0 ppid=3039 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 06:17:12.844000 audit[3121]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.844000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5e001a00 a2=0 a3=7ffd5e0019ec items=0 ppid=3039 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 06:17:12.857000 audit[3123]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.857000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe681aef60 a2=0 a3=7ffe681aef4c items=0 ppid=3039 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.857000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 06:17:12.866000 audit[3124]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.866000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd375d6580 a2=0 a3=7ffd375d656c items=0 ppid=3039 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 06:17:12.880000 audit[3126]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.880000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe51305220 a2=0 a3=7ffe5130520c items=0 ppid=3039 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 06:17:12.904000 audit[3129]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.904000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffef2161ab0 a2=0 a3=7ffef2161a9c items=0 ppid=3039 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 06:17:12.926000 audit[3132]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.926000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd5415c180 a2=0 a3=7ffd5415c16c items=0 ppid=3039 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 06:17:12.934000 audit[3133]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.934000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3c5a2f90 a2=0 a3=7ffd3c5a2f7c items=0 ppid=3039 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.934000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 06:17:12.951000 audit[3135]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.951000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff65e985c0 a2=0 a3=7fff65e985ac items=0 ppid=3039 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:17:12.979000 audit[3138]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.979000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6c0b9270 a2=0 a3=7fff6c0b925c items=0 ppid=3039 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:17:12.987000 audit[3139]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:12.987000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdac092670 a2=0 a3=7ffdac09265c items=0 ppid=3039 pid=3139 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:12.987000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 06:17:13.002000 audit[3141]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 06:17:13.002000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd0803bee0 a2=0 a3=7ffd0803becc items=0 ppid=3039 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 06:17:13.100000 audit[3147]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:13.100000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe9ad74220 a2=0 a3=7ffe9ad7420c items=0 ppid=3039 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:13.122000 audit[3147]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 28 06:17:13.122000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe9ad74220 a2=0 a3=7ffe9ad7420c items=0 ppid=3039 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:13.129000 audit[3152]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.129000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffea5591790 a2=0 a3=7ffea559177c items=0 ppid=3039 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 06:17:13.145000 audit[3154]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.145000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe519eecd0 a2=0 a3=7ffe519eecbc items=0 ppid=3039 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.145000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 06:17:13.164000 audit[3157]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.164000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff8c29b170 a2=0 a3=7fff8c29b15c items=0 ppid=3039 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 06:17:13.170000 audit[3158]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.170000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc2e29740 a2=0 a3=7ffdc2e2972c items=0 ppid=3039 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 06:17:13.183000 audit[3160]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.183000 audit[3160]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffd1a3dbb70 a2=0 a3=7ffd1a3dbb5c items=0 ppid=3039 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 06:17:13.188000 audit[3161]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.188000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc62eb1160 a2=0 a3=7ffc62eb114c items=0 ppid=3039 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 06:17:13.201000 audit[3163]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.201000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffda83083d0 a2=0 a3=7ffda83083bc items=0 ppid=3039 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.201000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 06:17:13.225000 audit[3166]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.225000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff9c1dba80 a2=0 a3=7fff9c1dba6c items=0 ppid=3039 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 06:17:13.231000 audit[3167]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.231000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb41c0e00 a2=0 a3=7fffb41c0dec items=0 ppid=3039 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 06:17:13.249000 audit[3169]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.249000 audit[3169]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffced9510b0 a2=0 a3=7ffced95109c items=0 ppid=3039 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.249000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 06:17:13.256000 audit[3170]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.256000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd62f1e6e0 a2=0 a3=7ffd62f1e6cc items=0 ppid=3039 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 06:17:13.269000 audit[3172]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.269000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeda05eb20 a2=0 a3=7ffeda05eb0c items=0 ppid=3039 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.269000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 06:17:13.295000 audit[3175]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.295000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc0250cb0 a2=0 a3=7fffc0250c9c items=0 ppid=3039 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 06:17:13.315000 audit[3178]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.315000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1b945910 a2=0 a3=7ffe1b9458fc items=0 ppid=3039 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 06:17:13.323000 audit[3179]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.323000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3374e1d0 a2=0 a3=7ffd3374e1bc items=0 ppid=3039 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 06:17:13.337000 audit[3181]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.337000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffffd5f930 a2=0 a3=7fffffd5f91c items=0 ppid=3039 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:17:13.359000 audit[3184]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.359000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfda838c0 a2=0 a3=7ffdfda838ac items=0 ppid=3039 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.359000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 06:17:13.367000 audit[3185]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.367000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0bcce720 a2=0 a3=7ffd0bcce70c items=0 ppid=3039 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 06:17:13.380000 audit[3187]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.380000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff9005c520 a2=0 a3=7fff9005c50c items=0 ppid=3039 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 06:17:13.388000 audit[3188]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.388000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcee414f0 a2=0 
a3=7fffcee414dc items=0 ppid=3039 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 06:17:13.403000 audit[3190]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.403000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd08430090 a2=0 a3=7ffd0843007c items=0 ppid=3039 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:17:13.431000 audit[3193]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 06:17:13.431000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffded4c58e0 a2=0 a3=7ffded4c58cc items=0 ppid=3039 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 06:17:13.449000 audit[3195]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 06:17:13.449000 audit[3195]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc0d945da0 a2=0 a3=7ffc0d945d8c items=0 ppid=3039 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.449000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:13.450000 audit[3195]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 06:17:13.450000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc0d945da0 a2=0 a3=7ffc0d945d8c items=0 ppid=3039 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:13.450000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:13.776356 kubelet[2882]: E0128 06:17:13.774443 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:14.381496 containerd[1608]: time="2026-01-28T06:17:14.380963858Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:14.384707 containerd[1608]: time="2026-01-28T06:17:14.384336142Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 28 06:17:14.385895 containerd[1608]: time="2026-01-28T06:17:14.385834942Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:14.389864 containerd[1608]: time="2026-01-28T06:17:14.389674753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:14.390360 containerd[1608]: time="2026-01-28T06:17:14.390172164Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.375476935s" Jan 28 06:17:14.390809 containerd[1608]: time="2026-01-28T06:17:14.390451081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 06:17:14.397407 containerd[1608]: time="2026-01-28T06:17:14.395583781Z" level=info msg="CreateContainer within sandbox \"28b3cec4918110835bce214bb1dd90381d4e471f524e0f85277e21c85c374fb8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 06:17:14.439699 containerd[1608]: time="2026-01-28T06:17:14.439585735Z" level=info msg="Container 31495bf4d955b2e9d8f4a739aaf128fc169a82af59ff86dbc6782453e3ae6faf: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:17:14.455553 containerd[1608]: time="2026-01-28T06:17:14.455370405Z" level=info msg="CreateContainer within sandbox \"28b3cec4918110835bce214bb1dd90381d4e471f524e0f85277e21c85c374fb8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"31495bf4d955b2e9d8f4a739aaf128fc169a82af59ff86dbc6782453e3ae6faf\"" Jan 28 06:17:14.456858 containerd[1608]: time="2026-01-28T06:17:14.456829142Z" level=info msg="StartContainer for 
\"31495bf4d955b2e9d8f4a739aaf128fc169a82af59ff86dbc6782453e3ae6faf\"" Jan 28 06:17:14.459830 containerd[1608]: time="2026-01-28T06:17:14.459801895Z" level=info msg="connecting to shim 31495bf4d955b2e9d8f4a739aaf128fc169a82af59ff86dbc6782453e3ae6faf" address="unix:///run/containerd/s/4e242aecc5b5d0ea53a0e07ebf93e089a204fd97b4d46aa10433936ce9a5123e" protocol=ttrpc version=3 Jan 28 06:17:14.549437 kubelet[2882]: E0128 06:17:14.548509 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:14.567119 systemd[1]: Started cri-containerd-31495bf4d955b2e9d8f4a739aaf128fc169a82af59ff86dbc6782453e3ae6faf.scope - libcontainer container 31495bf4d955b2e9d8f4a739aaf128fc169a82af59ff86dbc6782453e3ae6faf. Jan 28 06:17:14.619000 audit: BPF prog-id=146 op=LOAD Jan 28 06:17:14.624000 audit: BPF prog-id=147 op=LOAD Jan 28 06:17:14.624000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2945 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331343935626634643935356232653964386634613733396161663132 Jan 28 06:17:14.624000 audit: BPF prog-id=147 op=UNLOAD Jan 28 06:17:14.624000 audit[3196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2945 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:14.624000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331343935626634643935356232653964386634613733396161663132 Jan 28 06:17:14.624000 audit: BPF prog-id=148 op=LOAD Jan 28 06:17:14.624000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2945 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331343935626634643935356232653964386634613733396161663132 Jan 28 06:17:14.624000 audit: BPF prog-id=149 op=LOAD Jan 28 06:17:14.624000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2945 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331343935626634643935356232653964386634613733396161663132 Jan 28 06:17:14.625000 audit: BPF prog-id=149 op=UNLOAD Jan 28 06:17:14.625000 audit[3196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2945 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 06:17:14.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331343935626634643935356232653964386634613733396161663132 Jan 28 06:17:14.625000 audit: BPF prog-id=148 op=UNLOAD Jan 28 06:17:14.625000 audit[3196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2945 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:14.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331343935626634643935356232653964386634613733396161663132 Jan 28 06:17:14.625000 audit: BPF prog-id=150 op=LOAD Jan 28 06:17:14.625000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2945 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:14.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331343935626634643935356232653964386634613733396161663132 Jan 28 06:17:14.725463 containerd[1608]: time="2026-01-28T06:17:14.723482081Z" level=info msg="StartContainer for \"31495bf4d955b2e9d8f4a739aaf128fc169a82af59ff86dbc6782453e3ae6faf\" returns successfully" Jan 28 06:17:25.262585 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 28 06:17:25.291141 kernel: 
kauditd_printk_skb: 224 callbacks suppressed Jan 28 06:17:25.291386 kernel: audit: type=1106 audit(1769581045.261:523): pid=1833 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:17:25.261000 audit[1833]: USER_END pid=1833 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:17:25.302370 sshd[1832]: Connection closed by 10.0.0.1 port 36982 Jan 28 06:17:25.301580 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Jan 28 06:17:25.261000 audit[1833]: CRED_DISP pid=1833 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 06:17:25.322668 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Jan 28 06:17:25.377430 kernel: audit: type=1104 audit(1769581045.261:524): pid=1833 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 06:17:25.382449 kernel: audit: type=1106 audit(1769581045.309:525): pid=1828 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:17:25.309000 audit[1828]: USER_END pid=1828 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:17:25.309000 audit[1828]: CRED_DISP pid=1828 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:17:25.335905 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:36982.service: Deactivated successfully. Jan 28 06:17:25.366923 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 06:17:25.378131 systemd[1]: session-8.scope: Consumed 19.826s CPU time, 216.3M memory peak. Jan 28 06:17:25.391673 systemd-logind[1580]: Removed session 8. Jan 28 06:17:25.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.25:22-10.0.0.1:36982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:17:25.450551 kernel: audit: type=1104 audit(1769581045.309:526): pid=1828 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:17:25.450693 kernel: audit: type=1131 audit(1769581045.336:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.25:22-10.0.0.1:36982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:17:26.475000 audit[3294]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:26.498759 kernel: audit: type=1325 audit(1769581046.475:528): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:26.475000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe953ddaf0 a2=0 a3=7ffe953ddadc items=0 ppid=3039 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:26.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:26.567049 kernel: audit: type=1300 audit(1769581046.475:528): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe953ddaf0 a2=0 a3=7ffe953ddadc items=0 ppid=3039 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:26.567123 kernel: audit: type=1327 audit(1769581046.475:528): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:26.567146 kernel: audit: type=1325 audit(1769581046.512:529): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:26.512000 audit[3294]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:26.588069 kernel: audit: type=1300 audit(1769581046.512:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe953ddaf0 a2=0 a3=0 items=0 ppid=3039 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:26.512000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe953ddaf0 a2=0 a3=0 items=0 ppid=3039 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:26.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:26.729000 audit[3296]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:26.729000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdae166240 a2=0 a3=7ffdae16622c items=0 ppid=3039 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:26.729000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:26.737000 audit[3296]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:26.737000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdae166240 a2=0 a3=0 items=0 ppid=3039 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:26.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:35.427308 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 06:17:35.427451 kernel: audit: type=1325 audit(1769581055.393:532): table=filter:109 family=2 entries=16 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.393000 audit[3298]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.393000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb1a250b0 a2=0 a3=7ffeb1a2509c items=0 ppid=3039 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:35.471386 kernel: audit: type=1300 audit(1769581055.393:532): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb1a250b0 a2=0 a3=7ffeb1a2509c items=0 ppid=3039 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 06:17:35.471483 kernel: audit: type=1327 audit(1769581055.393:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:35.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:35.501000 audit[3298]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.501000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb1a250b0 a2=0 a3=0 items=0 ppid=3039 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:35.565925 kernel: audit: type=1325 audit(1769581055.501:533): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.567147 kernel: audit: type=1300 audit(1769581055.501:533): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb1a250b0 a2=0 a3=0 items=0 ppid=3039 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:35.567490 kernel: audit: type=1327 audit(1769581055.501:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:35.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:35.632000 audit[3300]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.656750 kernel: audit: 
type=1325 audit(1769581055.632:534): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.632000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc06f64d70 a2=0 a3=7ffc06f64d5c items=0 ppid=3039 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:35.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:35.719565 kernel: audit: type=1300 audit(1769581055.632:534): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc06f64d70 a2=0 a3=7ffc06f64d5c items=0 ppid=3039 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:35.720810 kernel: audit: type=1327 audit(1769581055.632:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:35.720844 kernel: audit: type=1325 audit(1769581055.684:535): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.684000 audit[3300]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:35.684000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc06f64d70 a2=0 a3=0 items=0 ppid=3039 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:17:35.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:37.984000 audit[3303]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:37.984000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe862924a0 a2=0 a3=7ffe8629248c items=0 ppid=3039 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:37.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:37.989000 audit[3303]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:37.989000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe862924a0 a2=0 a3=0 items=0 ppid=3039 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:37.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:38.050629 kubelet[2882]: I0128 06:17:38.050557 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-plbsg" podStartSLOduration=25.660484478 podStartE2EDuration="29.050527045s" podCreationTimestamp="2026-01-28 06:17:09 +0000 UTC" firstStartedPulling="2026-01-28 06:17:11.002677865 +0000 UTC m=+6.184944391" lastFinishedPulling="2026-01-28 06:17:14.392720432 +0000 UTC m=+9.574986958" 
observedRunningTime="2026-01-28 06:17:15.58211237 +0000 UTC m=+10.764378896" watchObservedRunningTime="2026-01-28 06:17:38.050527045 +0000 UTC m=+33.232793571" Jan 28 06:17:38.052000 audit[3305]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:38.052000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffca645dd70 a2=0 a3=7ffca645dd5c items=0 ppid=3039 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:38.059000 audit[3305]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:38.059000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca645dd70 a2=0 a3=0 items=0 ppid=3039 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:38.088734 systemd[1]: Created slice kubepods-besteffort-podb461c9b8_9534_4330_b645_dd787708f3ad.slice - libcontainer container kubepods-besteffort-podb461c9b8_9534_4330_b645_dd787708f3ad.slice. 
Jan 28 06:17:38.098877 kubelet[2882]: I0128 06:17:38.098728 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b461c9b8-9534-4330-b645-dd787708f3ad-tigera-ca-bundle\") pod \"calico-typha-65757db8d4-fhwdt\" (UID: \"b461c9b8-9534-4330-b645-dd787708f3ad\") " pod="calico-system/calico-typha-65757db8d4-fhwdt" Jan 28 06:17:38.099160 kubelet[2882]: I0128 06:17:38.099137 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs8ks\" (UniqueName: \"kubernetes.io/projected/b461c9b8-9534-4330-b645-dd787708f3ad-kube-api-access-zs8ks\") pod \"calico-typha-65757db8d4-fhwdt\" (UID: \"b461c9b8-9534-4330-b645-dd787708f3ad\") " pod="calico-system/calico-typha-65757db8d4-fhwdt" Jan 28 06:17:38.099494 kubelet[2882]: I0128 06:17:38.099476 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b461c9b8-9534-4330-b645-dd787708f3ad-typha-certs\") pod \"calico-typha-65757db8d4-fhwdt\" (UID: \"b461c9b8-9534-4330-b645-dd787708f3ad\") " pod="calico-system/calico-typha-65757db8d4-fhwdt" Jan 28 06:17:38.401841 kubelet[2882]: I0128 06:17:38.400581 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-flexvol-driver-host\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.401841 kubelet[2882]: I0128 06:17:38.400621 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-policysync\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " 
pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.401841 kubelet[2882]: I0128 06:17:38.400637 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-var-run-calico\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.401841 kubelet[2882]: I0128 06:17:38.400651 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-var-lib-calico\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.401841 kubelet[2882]: I0128 06:17:38.400665 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7qd\" (UniqueName: \"kubernetes.io/projected/aa0bba5a-654c-4010-8ff4-040810f7e09d-kube-api-access-8q7qd\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.405796 kubelet[2882]: I0128 06:17:38.400679 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-cni-bin-dir\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.405796 kubelet[2882]: I0128 06:17:38.400691 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa0bba5a-654c-4010-8ff4-040810f7e09d-tigera-ca-bundle\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 
06:17:38.405796 kubelet[2882]: I0128 06:17:38.400705 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-xtables-lock\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.405796 kubelet[2882]: I0128 06:17:38.400719 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-cni-log-dir\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.405796 kubelet[2882]: I0128 06:17:38.400731 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-cni-net-dir\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.406130 kubelet[2882]: I0128 06:17:38.400744 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa0bba5a-654c-4010-8ff4-040810f7e09d-lib-modules\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.406130 kubelet[2882]: I0128 06:17:38.400756 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa0bba5a-654c-4010-8ff4-040810f7e09d-node-certs\") pod \"calico-node-5phmn\" (UID: \"aa0bba5a-654c-4010-8ff4-040810f7e09d\") " pod="calico-system/calico-node-5phmn" Jan 28 06:17:38.422583 kubelet[2882]: E0128 06:17:38.421822 2882 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:38.427457 systemd[1]: Created slice kubepods-besteffort-podaa0bba5a_654c_4010_8ff4_040810f7e09d.slice - libcontainer container kubepods-besteffort-podaa0bba5a_654c_4010_8ff4_040810f7e09d.slice. Jan 28 06:17:38.433704 containerd[1608]: time="2026-01-28T06:17:38.433418370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65757db8d4-fhwdt,Uid:b461c9b8-9534-4330-b645-dd787708f3ad,Namespace:calico-system,Attempt:0,}" Jan 28 06:17:38.535364 kubelet[2882]: E0128 06:17:38.532957 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.535364 kubelet[2882]: W0128 06:17:38.533081 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.535364 kubelet[2882]: E0128 06:17:38.534590 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.551451 kubelet[2882]: E0128 06:17:38.550732 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.551451 kubelet[2882]: W0128 06:17:38.550756 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.551451 kubelet[2882]: E0128 06:17:38.550784 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.591470 kubelet[2882]: E0128 06:17:38.591431 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.593947 kubelet[2882]: W0128 06:17:38.593920 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.594426 kubelet[2882]: E0128 06:17:38.594402 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.600809 kubelet[2882]: E0128 06:17:38.600771 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:38.605666 kubelet[2882]: E0128 06:17:38.605643 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.605766 kubelet[2882]: W0128 06:17:38.605749 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.605864 kubelet[2882]: E0128 06:17:38.605849 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.611565 kubelet[2882]: E0128 06:17:38.611416 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.611565 kubelet[2882]: W0128 06:17:38.611435 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.611565 kubelet[2882]: E0128 06:17:38.611456 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.612673 containerd[1608]: time="2026-01-28T06:17:38.612522907Z" level=info msg="connecting to shim 6ab23e1c26b758147d469ac984465142b2de7739d2567863ee8bce81a429a41c" address="unix:///run/containerd/s/506bdc3fa9d0dbe407f9546d2619940e30298653bbb9513838616978fbe20777" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:17:38.613918 kubelet[2882]: E0128 06:17:38.613901 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.614517 kubelet[2882]: W0128 06:17:38.614128 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.614517 kubelet[2882]: E0128 06:17:38.614153 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.617114 kubelet[2882]: E0128 06:17:38.617094 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.619745 kubelet[2882]: W0128 06:17:38.619725 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.619840 kubelet[2882]: E0128 06:17:38.619826 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.622866 kubelet[2882]: E0128 06:17:38.622850 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.623669 kubelet[2882]: W0128 06:17:38.622949 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.623800 kubelet[2882]: E0128 06:17:38.623783 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.627127 kubelet[2882]: E0128 06:17:38.627109 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.628688 kubelet[2882]: W0128 06:17:38.628550 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.628784 kubelet[2882]: E0128 06:17:38.628769 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.629187 kubelet[2882]: E0128 06:17:38.629172 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.629187 kubelet[2882]: W0128 06:17:38.629766 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.629187 kubelet[2882]: E0128 06:17:38.629783 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.630680 kubelet[2882]: E0128 06:17:38.630664 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.630761 kubelet[2882]: W0128 06:17:38.630748 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.630823 kubelet[2882]: E0128 06:17:38.630811 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.632570 kubelet[2882]: E0128 06:17:38.632553 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.632640 kubelet[2882]: W0128 06:17:38.632627 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.632713 kubelet[2882]: E0128 06:17:38.632699 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.640541 kubelet[2882]: E0128 06:17:38.639593 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.640541 kubelet[2882]: W0128 06:17:38.639814 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.640541 kubelet[2882]: E0128 06:17:38.639829 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.653429 kubelet[2882]: E0128 06:17:38.652718 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.653429 kubelet[2882]: W0128 06:17:38.652824 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.653429 kubelet[2882]: E0128 06:17:38.652840 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.656746 kubelet[2882]: E0128 06:17:38.655724 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.656746 kubelet[2882]: W0128 06:17:38.655845 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.656746 kubelet[2882]: E0128 06:17:38.655861 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.663170 kubelet[2882]: E0128 06:17:38.662611 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.663170 kubelet[2882]: W0128 06:17:38.662720 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.666401 kubelet[2882]: E0128 06:17:38.665450 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.669591 kubelet[2882]: I0128 06:17:38.668557 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66-socket-dir\") pod \"csi-node-driver-bwxrt\" (UID: \"8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66\") " pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:17:38.669591 kubelet[2882]: E0128 06:17:38.669398 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.669591 kubelet[2882]: W0128 06:17:38.669412 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.669591 kubelet[2882]: E0128 06:17:38.669426 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.681884 kubelet[2882]: E0128 06:17:38.681412 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.681884 kubelet[2882]: W0128 06:17:38.681522 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.681884 kubelet[2882]: E0128 06:17:38.681641 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.682444 kubelet[2882]: E0128 06:17:38.682102 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.682444 kubelet[2882]: W0128 06:17:38.682113 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.682444 kubelet[2882]: E0128 06:17:38.682392 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.683712 kubelet[2882]: E0128 06:17:38.683591 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.683712 kubelet[2882]: W0128 06:17:38.683685 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.684751 kubelet[2882]: E0128 06:17:38.684625 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.684751 kubelet[2882]: I0128 06:17:38.684749 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66-registration-dir\") pod \"csi-node-driver-bwxrt\" (UID: \"8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66\") " pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:17:38.686312 kubelet[2882]: E0128 06:17:38.685787 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.686312 kubelet[2882]: W0128 06:17:38.685895 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.689367 kubelet[2882]: E0128 06:17:38.688855 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.693947 kubelet[2882]: E0128 06:17:38.692927 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.693947 kubelet[2882]: W0128 06:17:38.693133 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.698574 kubelet[2882]: E0128 06:17:38.698378 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.698650 kubelet[2882]: E0128 06:17:38.698643 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.698689 kubelet[2882]: W0128 06:17:38.698652 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.699643 kubelet[2882]: E0128 06:17:38.699516 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.699643 kubelet[2882]: W0128 06:17:38.699622 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.699643 kubelet[2882]: E0128 06:17:38.699639 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.701143 kubelet[2882]: E0128 06:17:38.700131 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.702536 kubelet[2882]: E0128 06:17:38.701603 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.702536 kubelet[2882]: W0128 06:17:38.701710 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.704127 kubelet[2882]: E0128 06:17:38.703970 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.704693 kubelet[2882]: E0128 06:17:38.704181 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.704693 kubelet[2882]: W0128 06:17:38.704190 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.705696 kubelet[2882]: E0128 06:17:38.705145 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.709531 kubelet[2882]: I0128 06:17:38.709407 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66-kubelet-dir\") pod \"csi-node-driver-bwxrt\" (UID: \"8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66\") " pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:17:38.709598 kubelet[2882]: E0128 06:17:38.709566 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.709598 kubelet[2882]: W0128 06:17:38.709574 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.709598 kubelet[2882]: E0128 06:17:38.709587 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.710699 kubelet[2882]: E0128 06:17:38.710520 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.710699 kubelet[2882]: W0128 06:17:38.710532 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.711521 kubelet[2882]: E0128 06:17:38.711399 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.711887 kubelet[2882]: E0128 06:17:38.711763 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.711887 kubelet[2882]: W0128 06:17:38.711875 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.712855 kubelet[2882]: E0128 06:17:38.712727 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.714645 kubelet[2882]: E0128 06:17:38.714505 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.714645 kubelet[2882]: W0128 06:17:38.714633 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.714748 kubelet[2882]: E0128 06:17:38.714660 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.715515 kubelet[2882]: E0128 06:17:38.715396 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.715515 kubelet[2882]: W0128 06:17:38.715505 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.716620 kubelet[2882]: E0128 06:17:38.716128 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.718886 kubelet[2882]: E0128 06:17:38.718409 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.718886 kubelet[2882]: W0128 06:17:38.718854 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.718886 kubelet[2882]: E0128 06:17:38.718866 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.735534 kubelet[2882]: E0128 06:17:38.734941 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:38.738805 containerd[1608]: time="2026-01-28T06:17:38.738559828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5phmn,Uid:aa0bba5a-654c-4010-8ff4-040810f7e09d,Namespace:calico-system,Attempt:0,}" Jan 28 06:17:38.824782 kubelet[2882]: E0128 06:17:38.824668 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.824782 kubelet[2882]: W0128 06:17:38.824703 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.824782 kubelet[2882]: E0128 06:17:38.824737 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.827498 kubelet[2882]: E0128 06:17:38.826432 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.827498 kubelet[2882]: W0128 06:17:38.826444 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.827498 kubelet[2882]: E0128 06:17:38.826461 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.828532 kubelet[2882]: E0128 06:17:38.828403 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.828532 kubelet[2882]: W0128 06:17:38.828514 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.828532 kubelet[2882]: E0128 06:17:38.828529 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.828640 kubelet[2882]: I0128 06:17:38.828553 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28vl\" (UniqueName: \"kubernetes.io/projected/8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66-kube-api-access-h28vl\") pod \"csi-node-driver-bwxrt\" (UID: \"8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66\") " pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:17:38.831570 kubelet[2882]: E0128 06:17:38.831527 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.831570 kubelet[2882]: W0128 06:17:38.831540 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.831570 kubelet[2882]: E0128 06:17:38.831552 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.831677 kubelet[2882]: I0128 06:17:38.831574 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66-varrun\") pod \"csi-node-driver-bwxrt\" (UID: \"8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66\") " pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:17:38.832745 kubelet[2882]: E0128 06:17:38.832607 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.832745 kubelet[2882]: W0128 06:17:38.832712 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.833562 kubelet[2882]: E0128 06:17:38.833172 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.833924 kubelet[2882]: E0128 06:17:38.833906 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.834097 kubelet[2882]: W0128 06:17:38.833971 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.834744 kubelet[2882]: E0128 06:17:38.834685 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.835845 kubelet[2882]: E0128 06:17:38.835830 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.837498 kubelet[2882]: W0128 06:17:38.836816 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.837498 kubelet[2882]: E0128 06:17:38.836974 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.837943 kubelet[2882]: E0128 06:17:38.837929 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.838112 kubelet[2882]: W0128 06:17:38.838098 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.840826 kubelet[2882]: E0128 06:17:38.840581 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.840826 kubelet[2882]: E0128 06:17:38.840637 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.840826 kubelet[2882]: W0128 06:17:38.840645 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.840928 kubelet[2882]: E0128 06:17:38.840916 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.842644 kubelet[2882]: E0128 06:17:38.842632 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.842702 kubelet[2882]: W0128 06:17:38.842691 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.844446 kubelet[2882]: E0128 06:17:38.844430 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.844668 kubelet[2882]: E0128 06:17:38.844646 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.844716 kubelet[2882]: W0128 06:17:38.844706 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.846412 kubelet[2882]: E0128 06:17:38.845849 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.846412 kubelet[2882]: E0128 06:17:38.845911 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.846412 kubelet[2882]: W0128 06:17:38.845918 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.846412 kubelet[2882]: E0128 06:17:38.845959 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.847612 kubelet[2882]: E0128 06:17:38.847598 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.847787 kubelet[2882]: W0128 06:17:38.847658 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.848556 kubelet[2882]: E0128 06:17:38.848349 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.850138 kubelet[2882]: E0128 06:17:38.850124 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.850397 kubelet[2882]: W0128 06:17:38.850186 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.855754 kubelet[2882]: E0128 06:17:38.855448 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.856668 kubelet[2882]: E0128 06:17:38.856655 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.856725 kubelet[2882]: W0128 06:17:38.856715 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.858553 kubelet[2882]: E0128 06:17:38.858538 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.860091 kubelet[2882]: E0128 06:17:38.859612 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.860091 kubelet[2882]: W0128 06:17:38.859626 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.860853 kubelet[2882]: E0128 06:17:38.860744 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.863837 kubelet[2882]: E0128 06:17:38.863637 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.863837 kubelet[2882]: W0128 06:17:38.863753 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.863932 kubelet[2882]: E0128 06:17:38.863919 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.864666 kubelet[2882]: E0128 06:17:38.864128 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.864666 kubelet[2882]: W0128 06:17:38.864530 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.865155 kubelet[2882]: E0128 06:17:38.865139 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.866183 kubelet[2882]: E0128 06:17:38.865661 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.866183 kubelet[2882]: W0128 06:17:38.865818 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.867166 kubelet[2882]: E0128 06:17:38.866687 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.869726 kubelet[2882]: E0128 06:17:38.869554 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.869726 kubelet[2882]: W0128 06:17:38.869566 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.869726 kubelet[2882]: E0128 06:17:38.869576 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.870705 kubelet[2882]: E0128 06:17:38.870663 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.870705 kubelet[2882]: W0128 06:17:38.870675 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.870705 kubelet[2882]: E0128 06:17:38.870684 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.894462 systemd[1]: Started cri-containerd-6ab23e1c26b758147d469ac984465142b2de7739d2567863ee8bce81a429a41c.scope - libcontainer container 6ab23e1c26b758147d469ac984465142b2de7739d2567863ee8bce81a429a41c. Jan 28 06:17:38.922719 containerd[1608]: time="2026-01-28T06:17:38.922146379Z" level=info msg="connecting to shim aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca" address="unix:///run/containerd/s/794632d418f8dca7a3cb2e2fa171eb92f62da211456f38855dfb8c0e6e10d5d1" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:17:38.936481 kubelet[2882]: E0128 06:17:38.935405 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.936481 kubelet[2882]: W0128 06:17:38.935426 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.936481 kubelet[2882]: E0128 06:17:38.935446 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.940074 kubelet[2882]: E0128 06:17:38.939668 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.940074 kubelet[2882]: W0128 06:17:38.939771 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.940950 kubelet[2882]: E0128 06:17:38.940617 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.944962 kubelet[2882]: E0128 06:17:38.944424 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.944962 kubelet[2882]: W0128 06:17:38.944523 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.944962 kubelet[2882]: E0128 06:17:38.944880 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.945970 kubelet[2882]: E0128 06:17:38.945844 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.945970 kubelet[2882]: W0128 06:17:38.945941 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.947346 kubelet[2882]: E0128 06:17:38.947116 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.950826 kubelet[2882]: E0128 06:17:38.950687 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.950826 kubelet[2882]: W0128 06:17:38.950699 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.950826 kubelet[2882]: E0128 06:17:38.950710 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.954453 kubelet[2882]: E0128 06:17:38.952594 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.954453 kubelet[2882]: W0128 06:17:38.952612 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.954453 kubelet[2882]: E0128 06:17:38.952628 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.954453 kubelet[2882]: E0128 06:17:38.953908 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.954453 kubelet[2882]: W0128 06:17:38.953918 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.955183 kubelet[2882]: E0128 06:17:38.954856 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.959494 kubelet[2882]: E0128 06:17:38.958433 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.959494 kubelet[2882]: W0128 06:17:38.958444 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.959494 kubelet[2882]: E0128 06:17:38.958455 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.959494 kubelet[2882]: E0128 06:17:38.959485 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.959611 kubelet[2882]: W0128 06:17:38.959498 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.959611 kubelet[2882]: E0128 06:17:38.959514 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:38.960933 kubelet[2882]: E0128 06:17:38.959789 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:38.960933 kubelet[2882]: W0128 06:17:38.959912 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:38.960933 kubelet[2882]: E0128 06:17:38.959928 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:38.971000 audit: BPF prog-id=151 op=LOAD Jan 28 06:17:38.973000 audit: BPF prog-id=152 op=LOAD Jan 28 06:17:38.973000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3323 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623233653163323662373538313437643436396163393834343635 Jan 28 06:17:38.975000 audit: BPF prog-id=152 op=UNLOAD Jan 28 06:17:38.975000 audit[3374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.975000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623233653163323662373538313437643436396163393834343635 Jan 28 06:17:38.977000 audit: BPF prog-id=153 op=LOAD Jan 28 06:17:38.977000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3323 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623233653163323662373538313437643436396163393834343635 Jan 28 06:17:38.977000 audit: BPF prog-id=154 op=LOAD Jan 28 06:17:38.977000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3323 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623233653163323662373538313437643436396163393834343635 Jan 28 06:17:38.977000 audit: BPF prog-id=154 op=UNLOAD Jan 28 06:17:38.977000 audit[3374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 06:17:38.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623233653163323662373538313437643436396163393834343635 Jan 28 06:17:38.977000 audit: BPF prog-id=153 op=UNLOAD Jan 28 06:17:38.977000 audit[3374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623233653163323662373538313437643436396163393834343635 Jan 28 06:17:38.977000 audit: BPF prog-id=155 op=LOAD Jan 28 06:17:38.977000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3323 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:38.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623233653163323662373538313437643436396163393834343635 Jan 28 06:17:39.017487 kubelet[2882]: E0128 06:17:39.016910 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:39.017487 kubelet[2882]: W0128 06:17:39.017135 2882 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:39.017487 kubelet[2882]: E0128 06:17:39.017159 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:39.157498 systemd[1]: Started cri-containerd-aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca.scope - libcontainer container aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca. Jan 28 06:17:39.158449 containerd[1608]: time="2026-01-28T06:17:39.157960069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65757db8d4-fhwdt,Uid:b461c9b8-9534-4330-b645-dd787708f3ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ab23e1c26b758147d469ac984465142b2de7739d2567863ee8bce81a429a41c\"" Jan 28 06:17:39.157000 audit[3462]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:39.157000 audit[3462]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd225b1100 a2=0 a3=7ffd225b10ec items=0 ppid=3039 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:39.167884 kubelet[2882]: E0128 06:17:39.167732 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:39.170000 audit[3462]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3462 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:39.172663 containerd[1608]: time="2026-01-28T06:17:39.171774101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 06:17:39.170000 audit[3462]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd225b1100 a2=0 a3=0 items=0 ppid=3039 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:39.240000 audit: BPF prog-id=156 op=LOAD Jan 28 06:17:39.245000 audit: BPF prog-id=157 op=LOAD Jan 28 06:17:39.245000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3414 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313233623030363265633662656335373832313161333735343031 Jan 28 06:17:39.247000 audit: BPF prog-id=157 op=UNLOAD Jan 28 06:17:39.247000 audit[3443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.247000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313233623030363265633662656335373832313161333735343031 Jan 28 06:17:39.249000 audit: BPF prog-id=158 op=LOAD Jan 28 06:17:39.249000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3414 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313233623030363265633662656335373832313161333735343031 Jan 28 06:17:39.249000 audit: BPF prog-id=159 op=LOAD Jan 28 06:17:39.249000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3414 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313233623030363265633662656335373832313161333735343031 Jan 28 06:17:39.249000 audit: BPF prog-id=159 op=UNLOAD Jan 28 06:17:39.249000 audit[3443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 06:17:39.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313233623030363265633662656335373832313161333735343031 Jan 28 06:17:39.249000 audit: BPF prog-id=158 op=UNLOAD Jan 28 06:17:39.249000 audit[3443]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313233623030363265633662656335373832313161333735343031 Jan 28 06:17:39.250000 audit: BPF prog-id=160 op=LOAD Jan 28 06:17:39.250000 audit[3443]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3414 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:39.250000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313233623030363265633662656335373832313161333735343031 Jan 28 06:17:39.353168 containerd[1608]: time="2026-01-28T06:17:39.352748691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5phmn,Uid:aa0bba5a-654c-4010-8ff4-040810f7e09d,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca\"" 
Jan 28 06:17:39.367701 kubelet[2882]: E0128 06:17:39.367576 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:40.048391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681599277.mount: Deactivated successfully. Jan 28 06:17:40.274086 kubelet[2882]: E0128 06:17:40.273784 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:42.274788 kubelet[2882]: E0128 06:17:42.274168 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:42.804587 containerd[1608]: time="2026-01-28T06:17:42.803705353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:42.812668 containerd[1608]: time="2026-01-28T06:17:42.812110939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 28 06:17:42.816374 containerd[1608]: time="2026-01-28T06:17:42.814805113Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:42.820452 containerd[1608]: time="2026-01-28T06:17:42.820108636Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:42.821384 containerd[1608]: time="2026-01-28T06:17:42.820787169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.648984544s" Jan 28 06:17:42.821384 containerd[1608]: time="2026-01-28T06:17:42.820920557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 06:17:42.827655 containerd[1608]: time="2026-01-28T06:17:42.827630087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 06:17:42.873869 containerd[1608]: time="2026-01-28T06:17:42.873731869Z" level=info msg="CreateContainer within sandbox \"6ab23e1c26b758147d469ac984465142b2de7739d2567863ee8bce81a429a41c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 06:17:42.897927 containerd[1608]: time="2026-01-28T06:17:42.897788670Z" level=info msg="Container 85dca5b1d8a71fba7b1f08dde1a25d523bcc4d9e707d678e5a2f7596b8c8ea38: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:17:42.898829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851495028.mount: Deactivated successfully. 
Jan 28 06:17:42.923543 containerd[1608]: time="2026-01-28T06:17:42.923171545Z" level=info msg="CreateContainer within sandbox \"6ab23e1c26b758147d469ac984465142b2de7739d2567863ee8bce81a429a41c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"85dca5b1d8a71fba7b1f08dde1a25d523bcc4d9e707d678e5a2f7596b8c8ea38\"" Jan 28 06:17:42.933736 containerd[1608]: time="2026-01-28T06:17:42.933523944Z" level=info msg="StartContainer for \"85dca5b1d8a71fba7b1f08dde1a25d523bcc4d9e707d678e5a2f7596b8c8ea38\"" Jan 28 06:17:42.937884 containerd[1608]: time="2026-01-28T06:17:42.937787085Z" level=info msg="connecting to shim 85dca5b1d8a71fba7b1f08dde1a25d523bcc4d9e707d678e5a2f7596b8c8ea38" address="unix:///run/containerd/s/506bdc3fa9d0dbe407f9546d2619940e30298653bbb9513838616978fbe20777" protocol=ttrpc version=3 Jan 28 06:17:42.996446 systemd[1]: Started cri-containerd-85dca5b1d8a71fba7b1f08dde1a25d523bcc4d9e707d678e5a2f7596b8c8ea38.scope - libcontainer container 85dca5b1d8a71fba7b1f08dde1a25d523bcc4d9e707d678e5a2f7596b8c8ea38. 
Jan 28 06:17:43.070000 audit: BPF prog-id=161 op=LOAD Jan 28 06:17:43.079413 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 28 06:17:43.079499 kernel: audit: type=1334 audit(1769581063.070:558): prog-id=161 op=LOAD Jan 28 06:17:43.090000 audit: BPF prog-id=162 op=LOAD Jan 28 06:17:43.101489 kernel: audit: type=1334 audit(1769581063.090:559): prog-id=162 op=LOAD Jan 28 06:17:43.101560 kernel: audit: type=1300 audit(1769581063.090:559): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.090000 audit[3489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.174178 kernel: audit: type=1327 audit(1769581063.090:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.174667 kernel: audit: type=1334 audit(1769581063.090:560): prog-id=162 op=UNLOAD Jan 28 06:17:43.090000 audit: BPF prog-id=162 op=UNLOAD Jan 28 06:17:43.090000 audit[3489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3489 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.256641 kernel: audit: type=1300 audit(1769581063.090:560): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.256720 kernel: audit: type=1327 audit(1769581063.090:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.090000 audit: BPF prog-id=163 op=LOAD Jan 28 06:17:43.267396 kernel: audit: type=1334 audit(1769581063.090:561): prog-id=163 op=LOAD Jan 28 06:17:43.090000 audit[3489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.307113 kernel: audit: type=1300 audit(1769581063.090:561): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:17:43.090000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.346124 containerd[1608]: time="2026-01-28T06:17:43.344470287Z" level=info msg="StartContainer for \"85dca5b1d8a71fba7b1f08dde1a25d523bcc4d9e707d678e5a2f7596b8c8ea38\" returns successfully" Jan 28 06:17:43.346408 kernel: audit: type=1327 audit(1769581063.090:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.091000 audit: BPF prog-id=164 op=LOAD Jan 28 06:17:43.091000 audit[3489]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.091000 audit: BPF prog-id=164 op=UNLOAD Jan 28 06:17:43.091000 audit[3489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.091000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.091000 audit: BPF prog-id=163 op=UNLOAD Jan 28 06:17:43.091000 audit[3489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.091000 audit: BPF prog-id=165 op=LOAD Jan 28 06:17:43.091000 audit[3489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3323 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:43.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835646361356231643861373166626137623166303864646531613235 Jan 28 06:17:43.677518 containerd[1608]: time="2026-01-28T06:17:43.677478605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:43.683524 containerd[1608]: time="2026-01-28T06:17:43.683151221Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 28 06:17:43.694437 containerd[1608]: time="2026-01-28T06:17:43.694387985Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:43.706607 containerd[1608]: time="2026-01-28T06:17:43.703708351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:43.713439 containerd[1608]: time="2026-01-28T06:17:43.712892430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 883.069241ms" Jan 28 06:17:43.713439 containerd[1608]: time="2026-01-28T06:17:43.713146793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 06:17:43.734641 containerd[1608]: time="2026-01-28T06:17:43.734612954Z" level=info msg="CreateContainer within sandbox \"aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 06:17:43.758397 kubelet[2882]: E0128 06:17:43.757186 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:43.773301 containerd[1608]: time="2026-01-28T06:17:43.772755475Z" level=info msg="Container 
db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:17:43.799804 kubelet[2882]: E0128 06:17:43.798845 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.799804 kubelet[2882]: W0128 06:17:43.798869 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.803428 kubelet[2882]: E0128 06:17:43.802860 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.811856 kubelet[2882]: E0128 06:17:43.811403 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.811856 kubelet[2882]: W0128 06:17:43.811519 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.811856 kubelet[2882]: E0128 06:17:43.811547 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.814127 kubelet[2882]: E0128 06:17:43.813972 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.814127 kubelet[2882]: W0128 06:17:43.814113 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.814419 kubelet[2882]: E0128 06:17:43.814139 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.818831 kubelet[2882]: E0128 06:17:43.818414 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.818831 kubelet[2882]: W0128 06:17:43.818435 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.818831 kubelet[2882]: E0128 06:17:43.818454 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.818928 containerd[1608]: time="2026-01-28T06:17:43.818733196Z" level=info msg="CreateContainer within sandbox \"aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad\"" Jan 28 06:17:43.827300 containerd[1608]: time="2026-01-28T06:17:43.826672367Z" level=info msg="StartContainer for \"db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad\"" Jan 28 06:17:43.829958 kubelet[2882]: E0128 06:17:43.829827 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.829958 kubelet[2882]: W0128 06:17:43.829942 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.830148 kubelet[2882]: E0128 06:17:43.829964 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.836385 containerd[1608]: time="2026-01-28T06:17:43.835744184Z" level=info msg="connecting to shim db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad" address="unix:///run/containerd/s/794632d418f8dca7a3cb2e2fa171eb92f62da211456f38855dfb8c0e6e10d5d1" protocol=ttrpc version=3 Jan 28 06:17:43.839393 kubelet[2882]: E0128 06:17:43.838952 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.839393 kubelet[2882]: W0128 06:17:43.839162 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.839393 kubelet[2882]: E0128 06:17:43.839178 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.844435 kubelet[2882]: E0128 06:17:43.844135 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.844435 kubelet[2882]: W0128 06:17:43.844173 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.844435 kubelet[2882]: E0128 06:17:43.844190 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.848943 kubelet[2882]: E0128 06:17:43.848506 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.848943 kubelet[2882]: W0128 06:17:43.848605 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.848943 kubelet[2882]: E0128 06:17:43.848620 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.850850 kubelet[2882]: E0128 06:17:43.850693 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.850850 kubelet[2882]: W0128 06:17:43.850711 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.850850 kubelet[2882]: E0128 06:17:43.850734 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.857730 kubelet[2882]: E0128 06:17:43.856966 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.857730 kubelet[2882]: W0128 06:17:43.856981 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.857730 kubelet[2882]: E0128 06:17:43.857113 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.861440 kubelet[2882]: E0128 06:17:43.861425 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.861768 kubelet[2882]: W0128 06:17:43.861753 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.861836 kubelet[2882]: E0128 06:17:43.861825 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.864486 kubelet[2882]: E0128 06:17:43.864431 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.864486 kubelet[2882]: W0128 06:17:43.864445 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.864486 kubelet[2882]: E0128 06:17:43.864461 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.869412 kubelet[2882]: E0128 06:17:43.868589 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.869412 kubelet[2882]: W0128 06:17:43.868700 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.869412 kubelet[2882]: E0128 06:17:43.868722 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.869412 kubelet[2882]: E0128 06:17:43.868951 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.869412 kubelet[2882]: W0128 06:17:43.868960 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.869412 kubelet[2882]: E0128 06:17:43.868968 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.869677 kubelet[2882]: E0128 06:17:43.869616 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.869677 kubelet[2882]: W0128 06:17:43.869625 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.869677 kubelet[2882]: E0128 06:17:43.869637 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.973620 kubelet[2882]: E0128 06:17:43.966826 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.973620 kubelet[2882]: W0128 06:17:43.966953 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.973620 kubelet[2882]: E0128 06:17:43.967569 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.973620 kubelet[2882]: E0128 06:17:43.968791 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.973620 kubelet[2882]: W0128 06:17:43.968802 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.973620 kubelet[2882]: E0128 06:17:43.968826 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.973620 kubelet[2882]: E0128 06:17:43.972450 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.973620 kubelet[2882]: W0128 06:17:43.972461 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.973620 kubelet[2882]: E0128 06:17:43.972816 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.978519 kubelet[2882]: E0128 06:17:43.978134 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.979587 kubelet[2882]: W0128 06:17:43.978903 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.980989 kubelet[2882]: E0128 06:17:43.980638 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.982974 kubelet[2882]: E0128 06:17:43.982892 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.982974 kubelet[2882]: W0128 06:17:43.982910 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.982974 kubelet[2882]: E0128 06:17:43.982927 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.985663 kubelet[2882]: E0128 06:17:43.985143 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.987575 kubelet[2882]: W0128 06:17:43.987176 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.988186 kubelet[2882]: E0128 06:17:43.987916 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.990443 kubelet[2882]: E0128 06:17:43.989501 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.990443 kubelet[2882]: W0128 06:17:43.989521 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.990443 kubelet[2882]: E0128 06:17:43.989597 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.992683 kubelet[2882]: E0128 06:17:43.992552 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.992683 kubelet[2882]: W0128 06:17:43.992567 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.993549 kubelet[2882]: E0128 06:17:43.992925 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:43.997582 kubelet[2882]: E0128 06:17:43.996861 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.997582 kubelet[2882]: W0128 06:17:43.996875 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:43.999577 kubelet[2882]: E0128 06:17:43.999449 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:43.999899 kubelet[2882]: E0128 06:17:43.999769 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:43.999899 kubelet[2882]: W0128 06:17:43.999876 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.002161 kubelet[2882]: E0128 06:17:44.001739 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:44.001882 systemd[1]: Started cri-containerd-db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad.scope - libcontainer container db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad. 
Jan 28 06:17:44.008511 kubelet[2882]: E0128 06:17:44.002564 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.008511 kubelet[2882]: W0128 06:17:44.002576 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.008511 kubelet[2882]: E0128 06:17:44.002596 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:44.009749 kubelet[2882]: E0128 06:17:44.008957 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.009749 kubelet[2882]: W0128 06:17:44.008976 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.010951 kubelet[2882]: E0128 06:17:44.010568 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:44.023368 kubelet[2882]: E0128 06:17:44.022875 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.023368 kubelet[2882]: W0128 06:17:44.022997 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.023368 kubelet[2882]: E0128 06:17:44.023128 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:44.030948 kubelet[2882]: E0128 06:17:44.030501 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.030948 kubelet[2882]: W0128 06:17:44.030518 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.030948 kubelet[2882]: E0128 06:17:44.030938 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:44.031611 kubelet[2882]: E0128 06:17:44.031475 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.031611 kubelet[2882]: W0128 06:17:44.031485 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.033551 kubelet[2882]: E0128 06:17:44.031891 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:44.033551 kubelet[2882]: E0128 06:17:44.032900 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.033551 kubelet[2882]: W0128 06:17:44.032910 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.033551 kubelet[2882]: E0128 06:17:44.032924 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:44.036822 kubelet[2882]: E0128 06:17:44.036635 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.036822 kubelet[2882]: W0128 06:17:44.036647 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.036822 kubelet[2882]: E0128 06:17:44.036661 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 06:17:44.037614 kubelet[2882]: E0128 06:17:44.037595 2882 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 06:17:44.037614 kubelet[2882]: W0128 06:17:44.037609 2882 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 06:17:44.037713 kubelet[2882]: E0128 06:17:44.037619 2882 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 06:17:44.227000 audit: BPF prog-id=166 op=LOAD Jan 28 06:17:44.227000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3414 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:44.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353935353164663063363463316533333233346635373334333065 Jan 28 06:17:44.227000 audit: BPF prog-id=167 op=LOAD Jan 28 06:17:44.227000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3414 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:44.227000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353935353164663063363463316533333233346635373334333065 Jan 28 06:17:44.227000 audit: BPF prog-id=167 op=UNLOAD Jan 28 06:17:44.227000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:44.227000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353935353164663063363463316533333233346635373334333065 Jan 28 06:17:44.228000 audit: BPF prog-id=166 op=UNLOAD Jan 28 06:17:44.228000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:44.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353935353164663063363463316533333233346635373334333065 Jan 28 06:17:44.228000 audit: BPF prog-id=168 op=LOAD Jan 28 06:17:44.228000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3414 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:17:44.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462353935353164663063363463316533333233346635373334333065 Jan 28 06:17:44.273907 kubelet[2882]: E0128 06:17:44.273441 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:44.317841 containerd[1608]: time="2026-01-28T06:17:44.317784848Z" level=info msg="StartContainer for \"db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad\" returns successfully" Jan 28 06:17:44.364817 systemd[1]: cri-containerd-db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad.scope: Deactivated successfully. Jan 28 06:17:44.372000 audit: BPF prog-id=168 op=UNLOAD Jan 28 06:17:44.379561 containerd[1608]: time="2026-01-28T06:17:44.379395582Z" level=info msg="received container exit event container_id:\"db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad\" id:\"db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad\" pid:3579 exited_at:{seconds:1769581064 nanos:377849906}" Jan 28 06:17:44.499554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db59551df0c64c1e33234f573430eee9e69c804fc0801d11facb892da3ecfdad-rootfs.mount: Deactivated successfully. 
Jan 28 06:17:44.780144 kubelet[2882]: E0128 06:17:44.779762 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:44.799560 kubelet[2882]: E0128 06:17:44.799417 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:44.810736 containerd[1608]: time="2026-01-28T06:17:44.810568959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 06:17:44.845487 kubelet[2882]: I0128 06:17:44.844661 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65757db8d4-fhwdt" podStartSLOduration=4.192651901 podStartE2EDuration="7.844644571s" podCreationTimestamp="2026-01-28 06:17:37 +0000 UTC" firstStartedPulling="2026-01-28 06:17:39.171581904 +0000 UTC m=+34.353848430" lastFinishedPulling="2026-01-28 06:17:42.823574573 +0000 UTC m=+38.005841100" observedRunningTime="2026-01-28 06:17:43.840959614 +0000 UTC m=+39.023226140" watchObservedRunningTime="2026-01-28 06:17:44.844644571 +0000 UTC m=+40.026911097" Jan 28 06:17:44.943000 audit[3620]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:44.943000 audit[3620]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1ee3b780 a2=0 a3=7ffe1ee3b76c items=0 ppid=3039 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:44.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:44.956000 audit[3620]: NETFILTER_CFG table=nat:120 
family=2 entries=19 op=nft_register_chain pid=3620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:17:44.956000 audit[3620]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe1ee3b780 a2=0 a3=7ffe1ee3b76c items=0 ppid=3039 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:44.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:17:45.789179 kubelet[2882]: E0128 06:17:45.789140 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:46.274102 kubelet[2882]: E0128 06:17:46.273887 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:46.794447 kubelet[2882]: E0128 06:17:46.793596 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:48.273370 kubelet[2882]: E0128 06:17:48.272719 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:50.272970 kubelet[2882]: E0128 06:17:50.272753 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:51.247653 containerd[1608]: time="2026-01-28T06:17:51.246722917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:51.252014 containerd[1608]: time="2026-01-28T06:17:51.251869674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 06:17:51.255982 containerd[1608]: time="2026-01-28T06:17:51.255717225Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:51.261845 containerd[1608]: time="2026-01-28T06:17:51.261600334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:17:51.262580 containerd[1608]: time="2026-01-28T06:17:51.262547256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.45193673s" Jan 28 06:17:51.262653 containerd[1608]: time="2026-01-28T06:17:51.262580518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 06:17:51.315340 containerd[1608]: time="2026-01-28T06:17:51.312020643Z" level=info msg="CreateContainer within sandbox 
\"aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 06:17:51.345136 containerd[1608]: time="2026-01-28T06:17:51.344874812Z" level=info msg="Container 81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:17:51.374174 containerd[1608]: time="2026-01-28T06:17:51.373823140Z" level=info msg="CreateContainer within sandbox \"aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7\"" Jan 28 06:17:51.376467 containerd[1608]: time="2026-01-28T06:17:51.375820902Z" level=info msg="StartContainer for \"81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7\"" Jan 28 06:17:51.382649 containerd[1608]: time="2026-01-28T06:17:51.381786656Z" level=info msg="connecting to shim 81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7" address="unix:///run/containerd/s/794632d418f8dca7a3cb2e2fa171eb92f62da211456f38855dfb8c0e6e10d5d1" protocol=ttrpc version=3 Jan 28 06:17:51.625545 systemd[1]: Started cri-containerd-81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7.scope - libcontainer container 81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7. 
Jan 28 06:17:51.885000 audit: BPF prog-id=169 op=LOAD Jan 28 06:17:51.893436 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 06:17:51.893538 kernel: audit: type=1334 audit(1769581071.885:574): prog-id=169 op=LOAD Jan 28 06:17:51.885000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:51.946831 kernel: audit: type=1300 audit(1769581071.885:574): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:51.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:51.989565 kernel: audit: type=1327 audit(1769581071.885:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:51.885000 audit: BPF prog-id=170 op=LOAD Jan 28 06:17:51.885000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:52.042861 kernel: audit: type=1334 
audit(1769581071.885:575): prog-id=170 op=LOAD Jan 28 06:17:52.042987 kernel: audit: type=1300 audit(1769581071.885:575): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:52.043457 kernel: audit: type=1327 audit(1769581071.885:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:51.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:52.082726 kernel: audit: type=1334 audit(1769581071.885:576): prog-id=170 op=UNLOAD Jan 28 06:17:51.885000 audit: BPF prog-id=170 op=UNLOAD Jan 28 06:17:52.092574 kernel: audit: type=1300 audit(1769581071.885:576): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:51.885000 audit[3629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:51.885000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:52.168455 kernel: audit: type=1327 audit(1769581071.885:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:51.885000 audit: BPF prog-id=169 op=UNLOAD Jan 28 06:17:51.885000 audit[3629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:52.181598 kernel: audit: type=1334 audit(1769581071.885:577): prog-id=169 op=UNLOAD Jan 28 06:17:51.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:51.885000 audit: BPF prog-id=171 op=LOAD Jan 28 06:17:51.885000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3414 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:17:51.885000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831626131346364646661346431303535636139326535323830616564 Jan 28 06:17:52.223518 containerd[1608]: time="2026-01-28T06:17:52.222921614Z" level=info msg="StartContainer for \"81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7\" returns successfully" Jan 28 06:17:52.286545 kubelet[2882]: E0128 06:17:52.282889 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:52.878500 kubelet[2882]: E0128 06:17:52.877464 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:53.946600 kubelet[2882]: E0128 06:17:53.946174 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:54.273540 kubelet[2882]: E0128 06:17:54.272604 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:55.752839 systemd[1]: cri-containerd-81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7.scope: Deactivated successfully. 
Jan 28 06:17:55.753947 systemd[1]: cri-containerd-81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7.scope: Consumed 4.126s CPU time, 180M memory peak, 3.6M read from disk, 171.3M written to disk. Jan 28 06:17:55.761000 audit: BPF prog-id=171 op=UNLOAD Jan 28 06:17:55.766004 containerd[1608]: time="2026-01-28T06:17:55.765752346Z" level=info msg="received container exit event container_id:\"81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7\" id:\"81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7\" pid:3642 exited_at:{seconds:1769581075 nanos:764826293}" Jan 28 06:17:55.886428 kubelet[2882]: I0128 06:17:55.886171 2882 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 06:17:55.943628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81ba14cddfa4d1055ca92e5280aedf66f06c774f610e82f24988b244d6ffc0f7-rootfs.mount: Deactivated successfully. Jan 28 06:17:56.021011 systemd[1]: Created slice kubepods-burstable-pod312d579f_6aa0_4444_b03c_b14e6ab72368.slice - libcontainer container kubepods-burstable-pod312d579f_6aa0_4444_b03c_b14e6ab72368.slice. 
Jan 28 06:17:56.034962 kubelet[2882]: I0128 06:17:56.034612 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4d6v\" (UniqueName: \"kubernetes.io/projected/c2d1933c-8984-4259-baec-74c0446170e9-kube-api-access-k4d6v\") pod \"coredns-668d6bf9bc-6k649\" (UID: \"c2d1933c-8984-4259-baec-74c0446170e9\") " pod="kube-system/coredns-668d6bf9bc-6k649" Jan 28 06:17:56.034962 kubelet[2882]: I0128 06:17:56.034668 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9nl\" (UniqueName: \"kubernetes.io/projected/312d579f-6aa0-4444-b03c-b14e6ab72368-kube-api-access-6h9nl\") pod \"coredns-668d6bf9bc-jw4ll\" (UID: \"312d579f-6aa0-4444-b03c-b14e6ab72368\") " pod="kube-system/coredns-668d6bf9bc-jw4ll" Jan 28 06:17:56.034962 kubelet[2882]: I0128 06:17:56.034688 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/312d579f-6aa0-4444-b03c-b14e6ab72368-config-volume\") pod \"coredns-668d6bf9bc-jw4ll\" (UID: \"312d579f-6aa0-4444-b03c-b14e6ab72368\") " pod="kube-system/coredns-668d6bf9bc-jw4ll" Jan 28 06:17:56.034962 kubelet[2882]: I0128 06:17:56.034705 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8hb\" (UniqueName: \"kubernetes.io/projected/17337418-3675-4e8a-a365-9d0165d3a261-kube-api-access-2b8hb\") pod \"calico-kube-controllers-7765448cd-4smp8\" (UID: \"17337418-3675-4e8a-a365-9d0165d3a261\") " pod="calico-system/calico-kube-controllers-7765448cd-4smp8" Jan 28 06:17:56.034962 kubelet[2882]: I0128 06:17:56.034721 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d1933c-8984-4259-baec-74c0446170e9-config-volume\") pod \"coredns-668d6bf9bc-6k649\" (UID: 
\"c2d1933c-8984-4259-baec-74c0446170e9\") " pod="kube-system/coredns-668d6bf9bc-6k649" Jan 28 06:17:56.038896 kubelet[2882]: I0128 06:17:56.034737 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17337418-3675-4e8a-a365-9d0165d3a261-tigera-ca-bundle\") pod \"calico-kube-controllers-7765448cd-4smp8\" (UID: \"17337418-3675-4e8a-a365-9d0165d3a261\") " pod="calico-system/calico-kube-controllers-7765448cd-4smp8" Jan 28 06:17:56.044866 systemd[1]: Created slice kubepods-burstable-podc2d1933c_8984_4259_baec_74c0446170e9.slice - libcontainer container kubepods-burstable-podc2d1933c_8984_4259_baec_74c0446170e9.slice. Jan 28 06:17:56.089941 systemd[1]: Created slice kubepods-besteffort-pod17337418_3675_4e8a_a365_9d0165d3a261.slice - libcontainer container kubepods-besteffort-pod17337418_3675_4e8a_a365_9d0165d3a261.slice. Jan 28 06:17:56.123613 systemd[1]: Created slice kubepods-besteffort-pod01b11439_90de_4b7d_a912_61c1cb955515.slice - libcontainer container kubepods-besteffort-pod01b11439_90de_4b7d_a912_61c1cb955515.slice. Jan 28 06:17:56.162417 systemd[1]: Created slice kubepods-besteffort-podcfce9196_a7e4_4009_b111_fc598ada449a.slice - libcontainer container kubepods-besteffort-podcfce9196_a7e4_4009_b111_fc598ada449a.slice. Jan 28 06:17:56.183784 systemd[1]: Created slice kubepods-besteffort-podc5caa60c_0db4_4584_8ceb_7bfff587bf9e.slice - libcontainer container kubepods-besteffort-podc5caa60c_0db4_4584_8ceb_7bfff587bf9e.slice. 
Jan 28 06:17:56.237935 kubelet[2882]: I0128 06:17:56.237903 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxvj\" (UniqueName: \"kubernetes.io/projected/c5caa60c-0db4-4584-8ceb-7bfff587bf9e-kube-api-access-ckxvj\") pod \"calico-apiserver-6f7d9cfbc8-r2r59\" (UID: \"c5caa60c-0db4-4584-8ceb-7bfff587bf9e\") " pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" Jan 28 06:17:56.238894 kubelet[2882]: I0128 06:17:56.238871 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/32744aca-35af-454a-aa76-f78a1f5cf3bf-goldmane-key-pair\") pod \"goldmane-666569f655-hpjqb\" (UID: \"32744aca-35af-454a-aa76-f78a1f5cf3bf\") " pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:17:56.239038 kubelet[2882]: I0128 06:17:56.239017 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01b11439-90de-4b7d-a912-61c1cb955515-whisker-backend-key-pair\") pod \"whisker-65d69855dd-wqmsp\" (UID: \"01b11439-90de-4b7d-a912-61c1cb955515\") " pod="calico-system/whisker-65d69855dd-wqmsp" Jan 28 06:17:56.239483 kubelet[2882]: I0128 06:17:56.239468 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01b11439-90de-4b7d-a912-61c1cb955515-whisker-ca-bundle\") pod \"whisker-65d69855dd-wqmsp\" (UID: \"01b11439-90de-4b7d-a912-61c1cb955515\") " pod="calico-system/whisker-65d69855dd-wqmsp" Jan 28 06:17:56.239557 kubelet[2882]: I0128 06:17:56.239545 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lvf\" (UniqueName: \"kubernetes.io/projected/32744aca-35af-454a-aa76-f78a1f5cf3bf-kube-api-access-c7lvf\") pod \"goldmane-666569f655-hpjqb\" (UID: 
\"32744aca-35af-454a-aa76-f78a1f5cf3bf\") " pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:17:56.239646 kubelet[2882]: I0128 06:17:56.239629 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cfce9196-a7e4-4009-b111-fc598ada449a-calico-apiserver-certs\") pod \"calico-apiserver-6f7d9cfbc8-cgc65\" (UID: \"cfce9196-a7e4-4009-b111-fc598ada449a\") " pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" Jan 28 06:17:56.239752 kubelet[2882]: I0128 06:17:56.239735 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5caa60c-0db4-4584-8ceb-7bfff587bf9e-calico-apiserver-certs\") pod \"calico-apiserver-6f7d9cfbc8-r2r59\" (UID: \"c5caa60c-0db4-4584-8ceb-7bfff587bf9e\") " pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" Jan 28 06:17:56.239858 kubelet[2882]: I0128 06:17:56.239840 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvxq\" (UniqueName: \"kubernetes.io/projected/cfce9196-a7e4-4009-b111-fc598ada449a-kube-api-access-sqvxq\") pod \"calico-apiserver-6f7d9cfbc8-cgc65\" (UID: \"cfce9196-a7e4-4009-b111-fc598ada449a\") " pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" Jan 28 06:17:56.239963 kubelet[2882]: I0128 06:17:56.239944 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32744aca-35af-454a-aa76-f78a1f5cf3bf-config\") pod \"goldmane-666569f655-hpjqb\" (UID: \"32744aca-35af-454a-aa76-f78a1f5cf3bf\") " pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:17:56.240175 kubelet[2882]: I0128 06:17:56.240035 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/32744aca-35af-454a-aa76-f78a1f5cf3bf-goldmane-ca-bundle\") pod \"goldmane-666569f655-hpjqb\" (UID: \"32744aca-35af-454a-aa76-f78a1f5cf3bf\") " pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:17:56.240495 kubelet[2882]: I0128 06:17:56.240474 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zsl\" (UniqueName: \"kubernetes.io/projected/01b11439-90de-4b7d-a912-61c1cb955515-kube-api-access-z5zsl\") pod \"whisker-65d69855dd-wqmsp\" (UID: \"01b11439-90de-4b7d-a912-61c1cb955515\") " pod="calico-system/whisker-65d69855dd-wqmsp" Jan 28 06:17:56.250722 systemd[1]: Created slice kubepods-besteffort-pod32744aca_35af_454a_aa76_f78a1f5cf3bf.slice - libcontainer container kubepods-besteffort-pod32744aca_35af_454a_aa76_f78a1f5cf3bf.slice. Jan 28 06:17:56.292975 systemd[1]: Created slice kubepods-besteffort-pod8aa84ac8_30ec_47f2_bd77_dd7e1d57eb66.slice - libcontainer container kubepods-besteffort-pod8aa84ac8_30ec_47f2_bd77_dd7e1d57eb66.slice. 
Jan 28 06:17:56.305391 containerd[1608]: time="2026-01-28T06:17:56.304473154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwxrt,Uid:8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66,Namespace:calico-system,Attempt:0,}" Jan 28 06:17:56.352665 kubelet[2882]: E0128 06:17:56.352624 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:56.367431 kubelet[2882]: E0128 06:17:56.363434 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:56.377991 containerd[1608]: time="2026-01-28T06:17:56.375590939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4ll,Uid:312d579f-6aa0-4444-b03c-b14e6ab72368,Namespace:kube-system,Attempt:0,}" Jan 28 06:17:56.386510 containerd[1608]: time="2026-01-28T06:17:56.384660984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k649,Uid:c2d1933c-8984-4259-baec-74c0446170e9,Namespace:kube-system,Attempt:0,}" Jan 28 06:17:56.407688 containerd[1608]: time="2026-01-28T06:17:56.407029549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7765448cd-4smp8,Uid:17337418-3675-4e8a-a365-9d0165d3a261,Namespace:calico-system,Attempt:0,}" Jan 28 06:17:56.463621 containerd[1608]: time="2026-01-28T06:17:56.463573247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d69855dd-wqmsp,Uid:01b11439-90de-4b7d-a912-61c1cb955515,Namespace:calico-system,Attempt:0,}" Jan 28 06:17:56.501939 containerd[1608]: time="2026-01-28T06:17:56.501170020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-r2r59,Uid:c5caa60c-0db4-4584-8ceb-7bfff587bf9e,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:17:56.571508 containerd[1608]: 
time="2026-01-28T06:17:56.570694544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hpjqb,Uid:32744aca-35af-454a-aa76-f78a1f5cf3bf,Namespace:calico-system,Attempt:0,}" Jan 28 06:17:56.783492 containerd[1608]: time="2026-01-28T06:17:56.782552589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-cgc65,Uid:cfce9196-a7e4-4009-b111-fc598ada449a,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:17:57.040623 kubelet[2882]: E0128 06:17:57.039860 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:17:57.059177 containerd[1608]: time="2026-01-28T06:17:57.058838280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 06:17:57.286419 containerd[1608]: time="2026-01-28T06:17:57.284844110Z" level=error msg="Failed to destroy network for sandbox \"a8296f38b59dee29e89a2faffa73a2d812257d272fdbb37f3c9c73dd47ff0e60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.294996 systemd[1]: run-netns-cni\x2dcbb48955\x2d917b\x2d907a\x2da1ec\x2de3de098830ef.mount: Deactivated successfully. 
Jan 28 06:17:57.340468 containerd[1608]: time="2026-01-28T06:17:57.339917261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4ll,Uid:312d579f-6aa0-4444-b03c-b14e6ab72368,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8296f38b59dee29e89a2faffa73a2d812257d272fdbb37f3c9c73dd47ff0e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.345398 kubelet[2882]: E0128 06:17:57.344551 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8296f38b59dee29e89a2faffa73a2d812257d272fdbb37f3c9c73dd47ff0e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.345398 kubelet[2882]: E0128 06:17:57.344763 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8296f38b59dee29e89a2faffa73a2d812257d272fdbb37f3c9c73dd47ff0e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jw4ll" Jan 28 06:17:57.345398 kubelet[2882]: E0128 06:17:57.344798 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8296f38b59dee29e89a2faffa73a2d812257d272fdbb37f3c9c73dd47ff0e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-jw4ll" Jan 28 06:17:57.345726 kubelet[2882]: E0128 06:17:57.344857 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jw4ll_kube-system(312d579f-6aa0-4444-b03c-b14e6ab72368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jw4ll_kube-system(312d579f-6aa0-4444-b03c-b14e6ab72368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8296f38b59dee29e89a2faffa73a2d812257d272fdbb37f3c9c73dd47ff0e60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jw4ll" podUID="312d579f-6aa0-4444-b03c-b14e6ab72368" Jan 28 06:17:57.350979 containerd[1608]: time="2026-01-28T06:17:57.350732625Z" level=error msg="Failed to destroy network for sandbox \"0c0960ab03eda1099b853385793f6be3f7db8d4939bf4677b15c22eb89d5c3a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.358619 systemd[1]: run-netns-cni\x2d09dd25e8\x2d934b\x2db174\x2d8d44\x2d32537098b609.mount: Deactivated successfully. 
Jan 28 06:17:57.397486 containerd[1608]: time="2026-01-28T06:17:57.396783705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwxrt,Uid:8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c0960ab03eda1099b853385793f6be3f7db8d4939bf4677b15c22eb89d5c3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.399640 kubelet[2882]: E0128 06:17:57.398662 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c0960ab03eda1099b853385793f6be3f7db8d4939bf4677b15c22eb89d5c3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.399640 kubelet[2882]: E0128 06:17:57.398752 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c0960ab03eda1099b853385793f6be3f7db8d4939bf4677b15c22eb89d5c3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:17:57.399640 kubelet[2882]: E0128 06:17:57.398780 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c0960ab03eda1099b853385793f6be3f7db8d4939bf4677b15c22eb89d5c3a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bwxrt" 
Jan 28 06:17:57.399830 kubelet[2882]: E0128 06:17:57.398831 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c0960ab03eda1099b853385793f6be3f7db8d4939bf4677b15c22eb89d5c3a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:17:57.454943 containerd[1608]: time="2026-01-28T06:17:57.454767382Z" level=error msg="Failed to destroy network for sandbox \"965c58b0557c01b8eed2b352afd782ac94956ec33d8f4b20e1d60df7170a2a6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.460834 systemd[1]: run-netns-cni\x2d3b4a4b03\x2d9dae\x2de1ef\x2deaf1\x2d24a1aa62bb7a.mount: Deactivated successfully. 
Jan 28 06:17:57.476536 containerd[1608]: time="2026-01-28T06:17:57.472608377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-r2r59,Uid:c5caa60c-0db4-4584-8ceb-7bfff587bf9e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"965c58b0557c01b8eed2b352afd782ac94956ec33d8f4b20e1d60df7170a2a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.476871 kubelet[2882]: E0128 06:17:57.476585 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"965c58b0557c01b8eed2b352afd782ac94956ec33d8f4b20e1d60df7170a2a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.476871 kubelet[2882]: E0128 06:17:57.476641 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"965c58b0557c01b8eed2b352afd782ac94956ec33d8f4b20e1d60df7170a2a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" Jan 28 06:17:57.476871 kubelet[2882]: E0128 06:17:57.476662 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"965c58b0557c01b8eed2b352afd782ac94956ec33d8f4b20e1d60df7170a2a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" Jan 28 06:17:57.476973 kubelet[2882]: E0128 06:17:57.476829 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"965c58b0557c01b8eed2b352afd782ac94956ec33d8f4b20e1d60df7170a2a6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:17:57.516572 containerd[1608]: time="2026-01-28T06:17:57.515557760Z" level=error msg="Failed to destroy network for sandbox \"979969dccd43dc773e1d26031ecc25890e75f7a55b844b39f01a95473ee855f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.526777 systemd[1]: run-netns-cni\x2d93cb413c\x2d80f1\x2def92\x2dcf1c\x2d9d17276c0e00.mount: Deactivated successfully. 
Jan 28 06:17:57.558673 containerd[1608]: time="2026-01-28T06:17:57.557485456Z" level=error msg="Failed to destroy network for sandbox \"06a29d443aabc44196f8918ae06c1d5077c1de1e60c06a9a4a2cfed3fa0c3630\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.579700 containerd[1608]: time="2026-01-28T06:17:57.578538466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hpjqb,Uid:32744aca-35af-454a-aa76-f78a1f5cf3bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a29d443aabc44196f8918ae06c1d5077c1de1e60c06a9a4a2cfed3fa0c3630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.580175 kubelet[2882]: E0128 06:17:57.579518 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a29d443aabc44196f8918ae06c1d5077c1de1e60c06a9a4a2cfed3fa0c3630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.580175 kubelet[2882]: E0128 06:17:57.579575 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a29d443aabc44196f8918ae06c1d5077c1de1e60c06a9a4a2cfed3fa0c3630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:17:57.580175 kubelet[2882]: E0128 06:17:57.579595 2882 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a29d443aabc44196f8918ae06c1d5077c1de1e60c06a9a4a2cfed3fa0c3630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:17:57.580542 kubelet[2882]: E0128 06:17:57.579834 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06a29d443aabc44196f8918ae06c1d5077c1de1e60c06a9a4a2cfed3fa0c3630\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:17:57.586614 containerd[1608]: time="2026-01-28T06:17:57.586394169Z" level=error msg="Failed to destroy network for sandbox \"09f9a58f305233470d6843ab7e21a5a66d34274268968c327b615b8b00f39c66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.596568 containerd[1608]: time="2026-01-28T06:17:57.595630886Z" level=error msg="Failed to destroy network for sandbox \"3a46820aee73a5cc662c23ab851c6e4c612c9eda9b23ffb37c2d9ef12fd57f92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 
06:17:57.599605 containerd[1608]: time="2026-01-28T06:17:57.599565416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k649,Uid:c2d1933c-8984-4259-baec-74c0446170e9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"979969dccd43dc773e1d26031ecc25890e75f7a55b844b39f01a95473ee855f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.601422 kubelet[2882]: E0128 06:17:57.600907 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979969dccd43dc773e1d26031ecc25890e75f7a55b844b39f01a95473ee855f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.601422 kubelet[2882]: E0128 06:17:57.601001 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979969dccd43dc773e1d26031ecc25890e75f7a55b844b39f01a95473ee855f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6k649" Jan 28 06:17:57.601563 kubelet[2882]: E0128 06:17:57.601031 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979969dccd43dc773e1d26031ecc25890e75f7a55b844b39f01a95473ee855f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6k649" Jan 
28 06:17:57.601829 kubelet[2882]: E0128 06:17:57.601793 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6k649_kube-system(c2d1933c-8984-4259-baec-74c0446170e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6k649_kube-system(c2d1933c-8984-4259-baec-74c0446170e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"979969dccd43dc773e1d26031ecc25890e75f7a55b844b39f01a95473ee855f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6k649" podUID="c2d1933c-8984-4259-baec-74c0446170e9" Jan 28 06:17:57.607682 containerd[1608]: time="2026-01-28T06:17:57.606688802Z" level=error msg="Failed to destroy network for sandbox \"aaa8fa1a347c063d0d90a5b55ba9dcaca8b9157d7386a583e59bfe993a4e6447\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.609732 containerd[1608]: time="2026-01-28T06:17:57.609694056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d69855dd-wqmsp,Uid:01b11439-90de-4b7d-a912-61c1cb955515,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f9a58f305233470d6843ab7e21a5a66d34274268968c327b615b8b00f39c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.613733 kubelet[2882]: E0128 06:17:57.613147 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"09f9a58f305233470d6843ab7e21a5a66d34274268968c327b615b8b00f39c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.613733 kubelet[2882]: E0128 06:17:57.613419 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f9a58f305233470d6843ab7e21a5a66d34274268968c327b615b8b00f39c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d69855dd-wqmsp" Jan 28 06:17:57.613733 kubelet[2882]: E0128 06:17:57.613447 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f9a58f305233470d6843ab7e21a5a66d34274268968c327b615b8b00f39c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d69855dd-wqmsp" Jan 28 06:17:57.613913 kubelet[2882]: E0128 06:17:57.613633 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65d69855dd-wqmsp_calico-system(01b11439-90de-4b7d-a912-61c1cb955515)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65d69855dd-wqmsp_calico-system(01b11439-90de-4b7d-a912-61c1cb955515)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09f9a58f305233470d6843ab7e21a5a66d34274268968c327b615b8b00f39c66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65d69855dd-wqmsp" 
podUID="01b11439-90de-4b7d-a912-61c1cb955515" Jan 28 06:17:57.615889 containerd[1608]: time="2026-01-28T06:17:57.615404566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-cgc65,Uid:cfce9196-a7e4-4009-b111-fc598ada449a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a46820aee73a5cc662c23ab851c6e4c612c9eda9b23ffb37c2d9ef12fd57f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.616465 kubelet[2882]: E0128 06:17:57.615579 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a46820aee73a5cc662c23ab851c6e4c612c9eda9b23ffb37c2d9ef12fd57f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.616465 kubelet[2882]: E0128 06:17:57.615628 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a46820aee73a5cc662c23ab851c6e4c612c9eda9b23ffb37c2d9ef12fd57f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" Jan 28 06:17:57.616465 kubelet[2882]: E0128 06:17:57.615650 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a46820aee73a5cc662c23ab851c6e4c612c9eda9b23ffb37c2d9ef12fd57f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" Jan 28 06:17:57.616586 kubelet[2882]: E0128 06:17:57.615828 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a46820aee73a5cc662c23ab851c6e4c612c9eda9b23ffb37c2d9ef12fd57f92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:17:57.627423 containerd[1608]: time="2026-01-28T06:17:57.626744037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7765448cd-4smp8,Uid:17337418-3675-4e8a-a365-9d0165d3a261,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa8fa1a347c063d0d90a5b55ba9dcaca8b9157d7386a583e59bfe993a4e6447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.629701 kubelet[2882]: E0128 06:17:57.629379 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa8fa1a347c063d0d90a5b55ba9dcaca8b9157d7386a583e59bfe993a4e6447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:17:57.629701 
kubelet[2882]: E0128 06:17:57.629428 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa8fa1a347c063d0d90a5b55ba9dcaca8b9157d7386a583e59bfe993a4e6447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" Jan 28 06:17:57.629701 kubelet[2882]: E0128 06:17:57.629445 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaa8fa1a347c063d0d90a5b55ba9dcaca8b9157d7386a583e59bfe993a4e6447\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" Jan 28 06:17:57.629871 kubelet[2882]: E0128 06:17:57.629473 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaa8fa1a347c063d0d90a5b55ba9dcaca8b9157d7386a583e59bfe993a4e6447\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:17:57.946778 systemd[1]: run-netns-cni\x2da711fb55\x2df4c3\x2dfc3b\x2dbec5\x2d7be032a7126b.mount: Deactivated successfully. 
Jan 28 06:17:57.947039 systemd[1]: run-netns-cni\x2dbc75e6be\x2d255f\x2d09bb\x2d78e8\x2d333e2da472ee.mount: Deactivated successfully. Jan 28 06:17:57.947519 systemd[1]: run-netns-cni\x2dfcebda6e\x2da7d4\x2d6d78\x2d49b6\x2de10346abc0ac.mount: Deactivated successfully. Jan 28 06:17:57.947615 systemd[1]: run-netns-cni\x2d2732b694\x2d3f48\x2d8651\x2df1db\x2d2dc73c948ed8.mount: Deactivated successfully. Jan 28 06:18:02.719522 update_engine[1584]: I20260128 06:18:02.717626 1584 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 06:18:02.719522 update_engine[1584]: I20260128 06:18:02.717682 1584 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 06:18:02.720605 update_engine[1584]: I20260128 06:18:02.720578 1584 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 06:18:02.727723 update_engine[1584]: I20260128 06:18:02.724798 1584 omaha_request_params.cc:62] Current group set to developer Jan 28 06:18:02.733721 update_engine[1584]: I20260128 06:18:02.733695 1584 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 06:18:02.733791 update_engine[1584]: I20260128 06:18:02.733775 1584 update_attempter.cc:643] Scheduling an action processor start. 
Jan 28 06:18:02.733852 update_engine[1584]: I20260128 06:18:02.733839 1584 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 06:18:02.745818 update_engine[1584]: I20260128 06:18:02.745056 1584 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 06:18:02.745818 update_engine[1584]: I20260128 06:18:02.745486 1584 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 06:18:02.745818 update_engine[1584]: I20260128 06:18:02.745499 1584 omaha_request_action.cc:272] Request: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: Jan 28 06:18:02.745818 update_engine[1584]: I20260128 06:18:02.745612 1584 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:18:02.768962 update_engine[1584]: I20260128 06:18:02.768919 1584 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:18:02.771490 update_engine[1584]: I20260128 06:18:02.771449 1584 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 06:18:02.785662 locksmithd[1676]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 06:18:02.786995 update_engine[1584]: E20260128 06:18:02.786826 1584 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 06:18:02.787060 update_engine[1584]: I20260128 06:18:02.787022 1584 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 06:18:11.280606 kubelet[2882]: E0128 06:18:11.279713 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:11.282575 containerd[1608]: time="2026-01-28T06:18:11.281643333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4ll,Uid:312d579f-6aa0-4444-b03c-b14e6ab72368,Namespace:kube-system,Attempt:0,}" Jan 28 06:18:11.284064 containerd[1608]: time="2026-01-28T06:18:11.283082062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwxrt,Uid:8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66,Namespace:calico-system,Attempt:0,}" Jan 28 06:18:11.284567 kubelet[2882]: E0128 06:18:11.283719 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:11.284853 containerd[1608]: time="2026-01-28T06:18:11.283946316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d69855dd-wqmsp,Uid:01b11439-90de-4b7d-a912-61c1cb955515,Namespace:calico-system,Attempt:0,}" Jan 28 06:18:11.284955 containerd[1608]: time="2026-01-28T06:18:11.284782203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k649,Uid:c2d1933c-8984-4259-baec-74c0446170e9,Namespace:kube-system,Attempt:0,}" Jan 28 06:18:11.833771 containerd[1608]: time="2026-01-28T06:18:11.833044199Z" level=error 
msg="Failed to destroy network for sandbox \"f0ba20830dd8a6264d1b542c5eff71ca335f0562106a0e255be10df9a3356990\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.842074 systemd[1]: run-netns-cni\x2d6d7acf82\x2d8d7c\x2d81b5\x2dc00c\x2dd8cb1996c4bb.mount: Deactivated successfully. Jan 28 06:18:11.892761 containerd[1608]: time="2026-01-28T06:18:11.889786458Z" level=error msg="Failed to destroy network for sandbox \"2a0ce3e49f4b08bda8670d3638f62456c7549b06afd30d10da38106a9501a0b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.901765 containerd[1608]: time="2026-01-28T06:18:11.900361458Z" level=error msg="Failed to destroy network for sandbox \"461552c9ed21b6d529e6f4f68882ec14f46aaeb70ea4f05fd48ef6df4dd861ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.909005 containerd[1608]: time="2026-01-28T06:18:11.908954721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d69855dd-wqmsp,Uid:01b11439-90de-4b7d-a912-61c1cb955515,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ba20830dd8a6264d1b542c5eff71ca335f0562106a0e255be10df9a3356990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.915079 kubelet[2882]: E0128 06:18:11.915024 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f0ba20830dd8a6264d1b542c5eff71ca335f0562106a0e255be10df9a3356990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.915807 kubelet[2882]: E0128 06:18:11.915771 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ba20830dd8a6264d1b542c5eff71ca335f0562106a0e255be10df9a3356990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d69855dd-wqmsp" Jan 28 06:18:11.915926 kubelet[2882]: E0128 06:18:11.915903 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ba20830dd8a6264d1b542c5eff71ca335f0562106a0e255be10df9a3356990\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d69855dd-wqmsp" Jan 28 06:18:11.916069 kubelet[2882]: E0128 06:18:11.916033 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65d69855dd-wqmsp_calico-system(01b11439-90de-4b7d-a912-61c1cb955515)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65d69855dd-wqmsp_calico-system(01b11439-90de-4b7d-a912-61c1cb955515)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0ba20830dd8a6264d1b542c5eff71ca335f0562106a0e255be10df9a3356990\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65d69855dd-wqmsp" 
podUID="01b11439-90de-4b7d-a912-61c1cb955515" Jan 28 06:18:11.920812 containerd[1608]: time="2026-01-28T06:18:11.920630553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwxrt,Uid:8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"461552c9ed21b6d529e6f4f68882ec14f46aaeb70ea4f05fd48ef6df4dd861ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.928753 containerd[1608]: time="2026-01-28T06:18:11.928704110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4ll,Uid:312d579f-6aa0-4444-b03c-b14e6ab72368,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0ce3e49f4b08bda8670d3638f62456c7549b06afd30d10da38106a9501a0b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.931072 kubelet[2882]: E0128 06:18:11.930727 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0ce3e49f4b08bda8670d3638f62456c7549b06afd30d10da38106a9501a0b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.931072 kubelet[2882]: E0128 06:18:11.930798 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0ce3e49f4b08bda8670d3638f62456c7549b06afd30d10da38106a9501a0b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jw4ll" Jan 28 06:18:11.931072 kubelet[2882]: E0128 06:18:11.930828 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a0ce3e49f4b08bda8670d3638f62456c7549b06afd30d10da38106a9501a0b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jw4ll" Jan 28 06:18:11.933998 kubelet[2882]: E0128 06:18:11.933954 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jw4ll_kube-system(312d579f-6aa0-4444-b03c-b14e6ab72368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jw4ll_kube-system(312d579f-6aa0-4444-b03c-b14e6ab72368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a0ce3e49f4b08bda8670d3638f62456c7549b06afd30d10da38106a9501a0b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jw4ll" podUID="312d579f-6aa0-4444-b03c-b14e6ab72368" Jan 28 06:18:11.938779 kubelet[2882]: E0128 06:18:11.931534 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461552c9ed21b6d529e6f4f68882ec14f46aaeb70ea4f05fd48ef6df4dd861ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.938779 kubelet[2882]: E0128 06:18:11.937683 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461552c9ed21b6d529e6f4f68882ec14f46aaeb70ea4f05fd48ef6df4dd861ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:18:11.938779 kubelet[2882]: E0128 06:18:11.937733 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461552c9ed21b6d529e6f4f68882ec14f46aaeb70ea4f05fd48ef6df4dd861ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bwxrt" Jan 28 06:18:11.938945 containerd[1608]: time="2026-01-28T06:18:11.938069507Z" level=error msg="Failed to destroy network for sandbox \"16615f399a7d7695dcb18d64abf8eb35778280f60aee501cb0d53edeaa58f875\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.938997 kubelet[2882]: E0128 06:18:11.937807 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"461552c9ed21b6d529e6f4f68882ec14f46aaeb70ea4f05fd48ef6df4dd861ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bwxrt" 
podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:18:11.964794 containerd[1608]: time="2026-01-28T06:18:11.962699703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k649,Uid:c2d1933c-8984-4259-baec-74c0446170e9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16615f399a7d7695dcb18d64abf8eb35778280f60aee501cb0d53edeaa58f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.966030 kubelet[2882]: E0128 06:18:11.965857 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16615f399a7d7695dcb18d64abf8eb35778280f60aee501cb0d53edeaa58f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:11.966094 kubelet[2882]: E0128 06:18:11.966063 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16615f399a7d7695dcb18d64abf8eb35778280f60aee501cb0d53edeaa58f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6k649" Jan 28 06:18:11.968505 kubelet[2882]: E0128 06:18:11.966098 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16615f399a7d7695dcb18d64abf8eb35778280f60aee501cb0d53edeaa58f875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6k649" Jan 28 06:18:11.968505 kubelet[2882]: E0128 06:18:11.966496 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6k649_kube-system(c2d1933c-8984-4259-baec-74c0446170e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6k649_kube-system(c2d1933c-8984-4259-baec-74c0446170e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16615f399a7d7695dcb18d64abf8eb35778280f60aee501cb0d53edeaa58f875\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6k649" podUID="c2d1933c-8984-4259-baec-74c0446170e9" Jan 28 06:18:12.275934 containerd[1608]: time="2026-01-28T06:18:12.274832662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-r2r59,Uid:c5caa60c-0db4-4584-8ceb-7bfff587bf9e,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:18:12.276966 containerd[1608]: time="2026-01-28T06:18:12.276802752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7765448cd-4smp8,Uid:17337418-3675-4e8a-a365-9d0165d3a261,Namespace:calico-system,Attempt:0,}" Jan 28 06:18:12.278451 containerd[1608]: time="2026-01-28T06:18:12.277914848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-cgc65,Uid:cfce9196-a7e4-4009-b111-fc598ada449a,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:18:12.360973 systemd[1]: run-netns-cni\x2d92d842db\x2d723a\x2db9e8\x2d0ca2\x2d939d08ad9511.mount: Deactivated successfully. Jan 28 06:18:12.361101 systemd[1]: run-netns-cni\x2d5dcd80fe\x2d2596\x2dbf60\x2dfdfa\x2d9c613984f620.mount: Deactivated successfully. 
Jan 28 06:18:12.361616 systemd[1]: run-netns-cni\x2d341908f9\x2d772e\x2d1437\x2d861f\x2db8c87e025858.mount: Deactivated successfully. Jan 28 06:18:12.636804 containerd[1608]: time="2026-01-28T06:18:12.623888069Z" level=error msg="Failed to destroy network for sandbox \"7fda21857f2ad063b967d87c7b36878f1d37b66b58920b5162e36db72ad89339\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.644550 containerd[1608]: time="2026-01-28T06:18:12.644103502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-cgc65,Uid:cfce9196-a7e4-4009-b111-fc598ada449a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fda21857f2ad063b967d87c7b36878f1d37b66b58920b5162e36db72ad89339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.648667 systemd[1]: run-netns-cni\x2da6e8ca0e\x2d3d19\x2d72d3\x2d6063\x2d380c61831e11.mount: Deactivated successfully. 
Jan 28 06:18:12.659088 kubelet[2882]: E0128 06:18:12.658882 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fda21857f2ad063b967d87c7b36878f1d37b66b58920b5162e36db72ad89339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.661837 kubelet[2882]: E0128 06:18:12.660667 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fda21857f2ad063b967d87c7b36878f1d37b66b58920b5162e36db72ad89339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" Jan 28 06:18:12.661837 kubelet[2882]: E0128 06:18:12.661708 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fda21857f2ad063b967d87c7b36878f1d37b66b58920b5162e36db72ad89339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" Jan 28 06:18:12.664829 kubelet[2882]: E0128 06:18:12.664693 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7fda21857f2ad063b967d87c7b36878f1d37b66b58920b5162e36db72ad89339\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:18:12.713799 containerd[1608]: time="2026-01-28T06:18:12.713732827Z" level=error msg="Failed to destroy network for sandbox \"27c14bfb8a6e3bea52ba89c1ab52b67984f4f67c5275b90e9cf39d297c071eb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.726899 update_engine[1584]: I20260128 06:18:12.722807 1584 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:18:12.726899 update_engine[1584]: I20260128 06:18:12.725544 1584 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:18:12.725929 systemd[1]: run-netns-cni\x2da191deb3\x2df459\x2ddd5f\x2dc685\x2dbf5e94bbe313.mount: Deactivated successfully. Jan 28 06:18:12.729096 update_engine[1584]: I20260128 06:18:12.727799 1584 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 06:18:12.740690 containerd[1608]: time="2026-01-28T06:18:12.739929295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-r2r59,Uid:c5caa60c-0db4-4584-8ceb-7bfff587bf9e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c14bfb8a6e3bea52ba89c1ab52b67984f4f67c5275b90e9cf39d297c071eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.742030 kubelet[2882]: E0128 06:18:12.741925 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c14bfb8a6e3bea52ba89c1ab52b67984f4f67c5275b90e9cf39d297c071eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.742634 kubelet[2882]: E0128 06:18:12.742613 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c14bfb8a6e3bea52ba89c1ab52b67984f4f67c5275b90e9cf39d297c071eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" Jan 28 06:18:12.742971 kubelet[2882]: E0128 06:18:12.742950 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c14bfb8a6e3bea52ba89c1ab52b67984f4f67c5275b90e9cf39d297c071eb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" Jan 28 06:18:12.745600 kubelet[2882]: E0128 06:18:12.745568 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c14bfb8a6e3bea52ba89c1ab52b67984f4f67c5275b90e9cf39d297c071eb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:18:12.745903 update_engine[1584]: E20260128 06:18:12.745635 1584 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 06:18:12.745903 update_engine[1584]: I20260128 06:18:12.745726 1584 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 06:18:12.780651 containerd[1608]: time="2026-01-28T06:18:12.779968673Z" level=error msg="Failed to destroy network for sandbox \"c467bd24815224ba20b7bfdcb8f466ebe47dd14cc6b34602dc6cb624d57f87ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.791094 systemd[1]: run-netns-cni\x2d59a92c26\x2d61ae\x2d2158\x2d6e03\x2de722f6e96635.mount: Deactivated successfully. 
Jan 28 06:18:12.793624 containerd[1608]: time="2026-01-28T06:18:12.793040861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7765448cd-4smp8,Uid:17337418-3675-4e8a-a365-9d0165d3a261,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c467bd24815224ba20b7bfdcb8f466ebe47dd14cc6b34602dc6cb624d57f87ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.795475 kubelet[2882]: E0128 06:18:12.794977 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c467bd24815224ba20b7bfdcb8f466ebe47dd14cc6b34602dc6cb624d57f87ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:12.795587 kubelet[2882]: E0128 06:18:12.795542 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c467bd24815224ba20b7bfdcb8f466ebe47dd14cc6b34602dc6cb624d57f87ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" Jan 28 06:18:12.795615 kubelet[2882]: E0128 06:18:12.795581 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c467bd24815224ba20b7bfdcb8f466ebe47dd14cc6b34602dc6cb624d57f87ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7765448cd-4smp8" Jan 28 06:18:12.796999 kubelet[2882]: E0128 06:18:12.796619 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c467bd24815224ba20b7bfdcb8f466ebe47dd14cc6b34602dc6cb624d57f87ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:18:13.286084 containerd[1608]: time="2026-01-28T06:18:13.286038627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hpjqb,Uid:32744aca-35af-454a-aa76-f78a1f5cf3bf,Namespace:calico-system,Attempt:0,}" Jan 28 06:18:13.734675 containerd[1608]: time="2026-01-28T06:18:13.731651355Z" level=error msg="Failed to destroy network for sandbox \"807be16b1726f7f3f3ed511dc2cdd9b137a6e2109ad8b17fe12850690c1765ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:13.739090 systemd[1]: run-netns-cni\x2da21492c4\x2d678a\x2de804\x2db03c\x2d2be9c701c1a3.mount: Deactivated successfully. 
Jan 28 06:18:13.746958 containerd[1608]: time="2026-01-28T06:18:13.746881624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hpjqb,Uid:32744aca-35af-454a-aa76-f78a1f5cf3bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"807be16b1726f7f3f3ed511dc2cdd9b137a6e2109ad8b17fe12850690c1765ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:13.748508 kubelet[2882]: E0128 06:18:13.747838 2882 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807be16b1726f7f3f3ed511dc2cdd9b137a6e2109ad8b17fe12850690c1765ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 06:18:13.748508 kubelet[2882]: E0128 06:18:13.747918 2882 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807be16b1726f7f3f3ed511dc2cdd9b137a6e2109ad8b17fe12850690c1765ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:18:13.748508 kubelet[2882]: E0128 06:18:13.747957 2882 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807be16b1726f7f3f3ed511dc2cdd9b137a6e2109ad8b17fe12850690c1765ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-hpjqb" Jan 28 06:18:13.748968 kubelet[2882]: E0128 06:18:13.748095 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"807be16b1726f7f3f3ed511dc2cdd9b137a6e2109ad8b17fe12850690c1765ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:18:14.273877 kubelet[2882]: E0128 06:18:14.273657 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:19.282596 kubelet[2882]: E0128 06:18:19.274851 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:19.475602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2784817708.mount: Deactivated successfully. 
Jan 28 06:18:19.623620 containerd[1608]: time="2026-01-28T06:18:19.623483543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:18:19.626815 containerd[1608]: time="2026-01-28T06:18:19.626761791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 06:18:19.649548 containerd[1608]: time="2026-01-28T06:18:19.649054309Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:18:19.655886 containerd[1608]: time="2026-01-28T06:18:19.654878582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 06:18:19.660788 containerd[1608]: time="2026-01-28T06:18:19.659975397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 22.601098185s" Jan 28 06:18:19.660788 containerd[1608]: time="2026-01-28T06:18:19.660521475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 06:18:19.743811 containerd[1608]: time="2026-01-28T06:18:19.743748299Z" level=info msg="CreateContainer within sandbox \"aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 06:18:19.802764 containerd[1608]: time="2026-01-28T06:18:19.799738177Z" level=info msg="Container 
3419fecd1eb7a71dacafca5bd7a22463e59eaf143df67bab8ec51510e57c90ab: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:18:19.847037 containerd[1608]: time="2026-01-28T06:18:19.846747608Z" level=info msg="CreateContainer within sandbox \"aa123b0062ec6bec578211a375401449dd21735c1a1c7a79a8bbe967c51674ca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3419fecd1eb7a71dacafca5bd7a22463e59eaf143df67bab8ec51510e57c90ab\"" Jan 28 06:18:19.870139 containerd[1608]: time="2026-01-28T06:18:19.869712801Z" level=info msg="StartContainer for \"3419fecd1eb7a71dacafca5bd7a22463e59eaf143df67bab8ec51510e57c90ab\"" Jan 28 06:18:19.877735 containerd[1608]: time="2026-01-28T06:18:19.876692635Z" level=info msg="connecting to shim 3419fecd1eb7a71dacafca5bd7a22463e59eaf143df67bab8ec51510e57c90ab" address="unix:///run/containerd/s/794632d418f8dca7a3cb2e2fa171eb92f62da211456f38855dfb8c0e6e10d5d1" protocol=ttrpc version=3 Jan 28 06:18:20.048697 systemd[1]: Started cri-containerd-3419fecd1eb7a71dacafca5bd7a22463e59eaf143df67bab8ec51510e57c90ab.scope - libcontainer container 3419fecd1eb7a71dacafca5bd7a22463e59eaf143df67bab8ec51510e57c90ab. 
Jan 28 06:18:20.256000 audit: BPF prog-id=172 op=LOAD Jan 28 06:18:20.266888 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 06:18:20.266991 kernel: audit: type=1334 audit(1769581100.256:580): prog-id=172 op=LOAD Jan 28 06:18:20.256000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.330946 kernel: audit: type=1300 audit(1769581100.256:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:20.376505 kernel: audit: type=1327 audit(1769581100.256:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:20.379944 kernel: audit: type=1334 audit(1769581100.257:581): prog-id=173 op=LOAD Jan 28 06:18:20.257000 audit: BPF prog-id=173 op=LOAD Jan 28 06:18:20.257000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.444647 kernel: audit: type=1300 audit(1769581100.257:581): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.446098 kernel: audit: type=1327 audit(1769581100.257:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:20.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:20.257000 audit: BPF prog-id=173 op=UNLOAD Jan 28 06:18:20.510936 kernel: audit: type=1334 audit(1769581100.257:582): prog-id=173 op=UNLOAD Jan 28 06:18:20.511021 kernel: audit: type=1300 audit(1769581100.257:582): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.257000 audit[4195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.553690 containerd[1608]: time="2026-01-28T06:18:20.551148278Z" level=info msg="StartContainer for 
\"3419fecd1eb7a71dacafca5bd7a22463e59eaf143df67bab8ec51510e57c90ab\" returns successfully" Jan 28 06:18:20.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:20.606848 kernel: audit: type=1327 audit(1769581100.257:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:20.257000 audit: BPF prog-id=172 op=UNLOAD Jan 28 06:18:20.626450 kernel: audit: type=1334 audit(1769581100.257:583): prog-id=172 op=UNLOAD Jan 28 06:18:20.257000 audit[4195]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:20.257000 audit: BPF prog-id=174 op=LOAD Jan 28 06:18:20.257000 audit[4195]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3414 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:20.257000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334313966656364316562376137316461636166636135626437613232 Jan 28 06:18:21.154682 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 06:18:21.158149 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 06:18:21.362090 kubelet[2882]: E0128 06:18:21.361853 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:21.806027 kubelet[2882]: I0128 06:18:21.805772 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5phmn" podStartSLOduration=3.510722191 podStartE2EDuration="43.805744287s" podCreationTimestamp="2026-01-28 06:17:38 +0000 UTC" firstStartedPulling="2026-01-28 06:17:39.368613066 +0000 UTC m=+34.550879591" lastFinishedPulling="2026-01-28 06:18:19.663635161 +0000 UTC m=+74.845901687" observedRunningTime="2026-01-28 06:18:21.532934613 +0000 UTC m=+76.715201139" watchObservedRunningTime="2026-01-28 06:18:21.805744287 +0000 UTC m=+76.988010814" Jan 28 06:18:21.957523 kubelet[2882]: I0128 06:18:21.955729 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01b11439-90de-4b7d-a912-61c1cb955515-whisker-ca-bundle\") pod \"01b11439-90de-4b7d-a912-61c1cb955515\" (UID: \"01b11439-90de-4b7d-a912-61c1cb955515\") " Jan 28 06:18:21.957523 kubelet[2882]: I0128 06:18:21.955791 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5zsl\" (UniqueName: \"kubernetes.io/projected/01b11439-90de-4b7d-a912-61c1cb955515-kube-api-access-z5zsl\") pod \"01b11439-90de-4b7d-a912-61c1cb955515\" 
(UID: \"01b11439-90de-4b7d-a912-61c1cb955515\") " Jan 28 06:18:21.957523 kubelet[2882]: I0128 06:18:21.955811 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01b11439-90de-4b7d-a912-61c1cb955515-whisker-backend-key-pair\") pod \"01b11439-90de-4b7d-a912-61c1cb955515\" (UID: \"01b11439-90de-4b7d-a912-61c1cb955515\") " Jan 28 06:18:21.957523 kubelet[2882]: I0128 06:18:21.956941 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b11439-90de-4b7d-a912-61c1cb955515-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "01b11439-90de-4b7d-a912-61c1cb955515" (UID: "01b11439-90de-4b7d-a912-61c1cb955515"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 06:18:21.974021 systemd[1]: var-lib-kubelet-pods-01b11439\x2d90de\x2d4b7d\x2da912\x2d61c1cb955515-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 06:18:21.980084 kubelet[2882]: I0128 06:18:21.980023 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b11439-90de-4b7d-a912-61c1cb955515-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "01b11439-90de-4b7d-a912-61c1cb955515" (UID: "01b11439-90de-4b7d-a912-61c1cb955515"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 06:18:21.988475 systemd[1]: var-lib-kubelet-pods-01b11439\x2d90de\x2d4b7d\x2da912\x2d61c1cb955515-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5zsl.mount: Deactivated successfully. 
Jan 28 06:18:21.993529 kubelet[2882]: I0128 06:18:21.993145 2882 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b11439-90de-4b7d-a912-61c1cb955515-kube-api-access-z5zsl" (OuterVolumeSpecName: "kube-api-access-z5zsl") pod "01b11439-90de-4b7d-a912-61c1cb955515" (UID: "01b11439-90de-4b7d-a912-61c1cb955515"). InnerVolumeSpecName "kube-api-access-z5zsl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 06:18:22.059947 kubelet[2882]: I0128 06:18:22.059695 2882 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01b11439-90de-4b7d-a912-61c1cb955515-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 06:18:22.068739 kubelet[2882]: I0128 06:18:22.062130 2882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5zsl\" (UniqueName: \"kubernetes.io/projected/01b11439-90de-4b7d-a912-61c1cb955515-kube-api-access-z5zsl\") on node \"localhost\" DevicePath \"\"" Jan 28 06:18:22.068739 kubelet[2882]: I0128 06:18:22.062170 2882 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01b11439-90de-4b7d-a912-61c1cb955515-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 06:18:22.372172 kubelet[2882]: E0128 06:18:22.370676 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:22.411111 systemd[1]: Removed slice kubepods-besteffort-pod01b11439_90de_4b7d_a912_61c1cb955515.slice - libcontainer container kubepods-besteffort-pod01b11439_90de_4b7d_a912_61c1cb955515.slice. 
Jan 28 06:18:22.715545 update_engine[1584]: I20260128 06:18:22.714798 1584 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:18:22.715545 update_engine[1584]: I20260128 06:18:22.715148 1584 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:18:22.716083 update_engine[1584]: I20260128 06:18:22.715890 1584 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 06:18:22.737516 update_engine[1584]: E20260128 06:18:22.737159 1584 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 06:18:22.740533 update_engine[1584]: I20260128 06:18:22.740504 1584 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 06:18:22.775699 kubelet[2882]: I0128 06:18:22.775079 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3984d8a-f7f9-40c1-99bb-6a83cd256ee4-whisker-ca-bundle\") pod \"whisker-69cbcd96f4-7mm4t\" (UID: \"c3984d8a-f7f9-40c1-99bb-6a83cd256ee4\") " pod="calico-system/whisker-69cbcd96f4-7mm4t" Jan 28 06:18:22.775699 kubelet[2882]: I0128 06:18:22.775610 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7cn8\" (UniqueName: \"kubernetes.io/projected/c3984d8a-f7f9-40c1-99bb-6a83cd256ee4-kube-api-access-z7cn8\") pod \"whisker-69cbcd96f4-7mm4t\" (UID: \"c3984d8a-f7f9-40c1-99bb-6a83cd256ee4\") " pod="calico-system/whisker-69cbcd96f4-7mm4t" Jan 28 06:18:22.775699 kubelet[2882]: I0128 06:18:22.775649 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3984d8a-f7f9-40c1-99bb-6a83cd256ee4-whisker-backend-key-pair\") pod \"whisker-69cbcd96f4-7mm4t\" (UID: \"c3984d8a-f7f9-40c1-99bb-6a83cd256ee4\") " pod="calico-system/whisker-69cbcd96f4-7mm4t" Jan 28 06:18:22.788606 
systemd[1]: Created slice kubepods-besteffort-podc3984d8a_f7f9_40c1_99bb_6a83cd256ee4.slice - libcontainer container kubepods-besteffort-podc3984d8a_f7f9_40c1_99bb_6a83cd256ee4.slice. Jan 28 06:18:23.109854 containerd[1608]: time="2026-01-28T06:18:23.107731706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69cbcd96f4-7mm4t,Uid:c3984d8a-f7f9-40c1-99bb-6a83cd256ee4,Namespace:calico-system,Attempt:0,}" Jan 28 06:18:23.294726 kubelet[2882]: I0128 06:18:23.292150 2882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b11439-90de-4b7d-a912-61c1cb955515" path="/var/lib/kubelet/pods/01b11439-90de-4b7d-a912-61c1cb955515/volumes" Jan 28 06:18:24.077558 systemd-networkd[1516]: calif0b078e2d85: Link UP Jan 28 06:18:24.094836 systemd-networkd[1516]: calif0b078e2d85: Gained carrier Jan 28 06:18:24.147795 containerd[1608]: 2026-01-28 06:18:23.453 [INFO][4309] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 06:18:24.147795 containerd[1608]: 2026-01-28 06:18:23.558 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0 whisker-69cbcd96f4- calico-system c3984d8a-f7f9-40c1-99bb-6a83cd256ee4 1050 0 2026-01-28 06:18:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69cbcd96f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-69cbcd96f4-7mm4t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif0b078e2d85 [] [] }} ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-" Jan 28 06:18:24.147795 containerd[1608]: 2026-01-28 06:18:23.559 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" Jan 28 06:18:24.147795 containerd[1608]: 2026-01-28 06:18:23.878 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" HandleID="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Workload="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.881 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" HandleID="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Workload="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002740b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-69cbcd96f4-7mm4t", "timestamp":"2026-01-28 06:18:23.878975192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.881 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.882 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.882 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.909 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" host="localhost" Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.937 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.957 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.966 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.978 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:24.148784 containerd[1608]: 2026-01-28 06:18:23.978 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" host="localhost" Jan 28 06:18:24.149539 containerd[1608]: 2026-01-28 06:18:23.985 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7 Jan 28 06:18:24.149539 containerd[1608]: 2026-01-28 06:18:24.001 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" host="localhost" Jan 28 06:18:24.149539 containerd[1608]: 2026-01-28 06:18:24.014 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" host="localhost" Jan 28 06:18:24.149539 containerd[1608]: 2026-01-28 06:18:24.014 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" host="localhost" Jan 28 06:18:24.149539 containerd[1608]: 2026-01-28 06:18:24.014 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 06:18:24.149539 containerd[1608]: 2026-01-28 06:18:24.014 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" HandleID="k8s-pod-network.8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Workload="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" Jan 28 06:18:24.149799 containerd[1608]: 2026-01-28 06:18:24.027 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0", GenerateName:"whisker-69cbcd96f4-", Namespace:"calico-system", SelfLink:"", UID:"c3984d8a-f7f9-40c1-99bb-6a83cd256ee4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 18, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69cbcd96f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-69cbcd96f4-7mm4t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif0b078e2d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:24.149799 containerd[1608]: 2026-01-28 06:18:24.027 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" Jan 28 06:18:24.150929 containerd[1608]: 2026-01-28 06:18:24.027 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0b078e2d85 ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" Jan 28 06:18:24.150929 containerd[1608]: 2026-01-28 06:18:24.100 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" Jan 28 06:18:24.150967 containerd[1608]: 2026-01-28 06:18:24.105 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" 
WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0", GenerateName:"whisker-69cbcd96f4-", Namespace:"calico-system", SelfLink:"", UID:"c3984d8a-f7f9-40c1-99bb-6a83cd256ee4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 18, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69cbcd96f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7", Pod:"whisker-69cbcd96f4-7mm4t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif0b078e2d85", MAC:"02:69:52:8a:25:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:24.151485 containerd[1608]: 2026-01-28 06:18:24.140 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" Namespace="calico-system" Pod="whisker-69cbcd96f4-7mm4t" WorkloadEndpoint="localhost-k8s-whisker--69cbcd96f4--7mm4t-eth0" Jan 28 06:18:24.281545 containerd[1608]: time="2026-01-28T06:18:24.279673658Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7765448cd-4smp8,Uid:17337418-3675-4e8a-a365-9d0165d3a261,Namespace:calico-system,Attempt:0,}" Jan 28 06:18:24.407493 containerd[1608]: time="2026-01-28T06:18:24.405142700Z" level=info msg="connecting to shim 8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7" address="unix:///run/containerd/s/2b81da96f30c69f6ee3fee72f2c11cfbe883c8745e5a91fdad2fa5a540990d31" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:24.591788 systemd[1]: Started cri-containerd-8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7.scope - libcontainer container 8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7. Jan 28 06:18:24.716000 audit: BPF prog-id=175 op=LOAD Jan 28 06:18:24.725000 audit: BPF prog-id=176 op=LOAD Jan 28 06:18:24.725000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4408 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:24.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333466613465646264373736333237323131663238386565393731 Jan 28 06:18:24.725000 audit: BPF prog-id=176 op=UNLOAD Jan 28 06:18:24.725000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4408 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:24.725000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333466613465646264373736333237323131663238386565393731 Jan 28 06:18:24.725000 audit: BPF prog-id=177 op=LOAD Jan 28 06:18:24.725000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4408 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:24.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333466613465646264373736333237323131663238386565393731 Jan 28 06:18:24.725000 audit: BPF prog-id=178 op=LOAD Jan 28 06:18:24.725000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4408 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:24.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333466613465646264373736333237323131663238386565393731 Jan 28 06:18:24.725000 audit: BPF prog-id=178 op=UNLOAD Jan 28 06:18:24.725000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4408 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 06:18:24.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333466613465646264373736333237323131663238386565393731 Jan 28 06:18:24.725000 audit: BPF prog-id=177 op=UNLOAD Jan 28 06:18:24.725000 audit[4432]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4408 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:24.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333466613465646264373736333237323131663238386565393731 Jan 28 06:18:24.725000 audit: BPF prog-id=179 op=LOAD Jan 28 06:18:24.725000 audit[4432]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4408 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:24.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836333466613465646264373736333237323131663238386565393731 Jan 28 06:18:24.737982 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:25.074682 containerd[1608]: time="2026-01-28T06:18:25.073872849Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-69cbcd96f4-7mm4t,Uid:c3984d8a-f7f9-40c1-99bb-6a83cd256ee4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8634fa4edbd776327211f288ee971964f3a1cadb08c9ded07e1272c45b6708c7\"" Jan 28 06:18:25.087655 containerd[1608]: time="2026-01-28T06:18:25.086921645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:18:25.126527 systemd-networkd[1516]: cali7d91fc36cca: Link UP Jan 28 06:18:25.134621 systemd-networkd[1516]: cali7d91fc36cca: Gained carrier Jan 28 06:18:25.172714 containerd[1608]: time="2026-01-28T06:18:25.172059053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:25.176156 containerd[1608]: time="2026-01-28T06:18:25.175796868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:18:25.176156 containerd[1608]: time="2026-01-28T06:18:25.175872508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:25.178579 kubelet[2882]: E0128 06:18:25.176069 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:18:25.178579 kubelet[2882]: E0128 06:18:25.176117 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:18:25.184506 kubelet[2882]: E0128 06:18:25.184153 2882 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b0ef5b7dffa04fdda9d5c4de471b2ce8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:25.192513 containerd[1608]: time="2026-01-28T06:18:25.191761436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:18:25.203846 
containerd[1608]: 2026-01-28 06:18:24.536 [INFO][4383] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 06:18:25.203846 containerd[1608]: 2026-01-28 06:18:24.586 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0 calico-kube-controllers-7765448cd- calico-system 17337418-3675-4e8a-a365-9d0165d3a261 930 0 2026-01-28 06:17:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7765448cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7765448cd-4smp8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7d91fc36cca [] [] }} ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-" Jan 28 06:18:25.203846 containerd[1608]: 2026-01-28 06:18:24.587 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" Jan 28 06:18:25.203846 containerd[1608]: 2026-01-28 06:18:24.874 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" HandleID="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Workload="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.875 [INFO][4472] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" HandleID="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Workload="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000432440), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7765448cd-4smp8", "timestamp":"2026-01-28 06:18:24.874698132 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.875 [INFO][4472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.875 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.875 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.915 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" host="localhost" Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.942 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.977 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:24.986 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:25.004 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:25.209652 containerd[1608]: 2026-01-28 06:18:25.004 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" host="localhost" Jan 28 06:18:25.210064 containerd[1608]: 2026-01-28 06:18:25.016 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66 Jan 28 06:18:25.210064 containerd[1608]: 2026-01-28 06:18:25.041 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" host="localhost" Jan 28 06:18:25.210064 containerd[1608]: 2026-01-28 06:18:25.060 [INFO][4472] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" host="localhost" Jan 28 06:18:25.210064 containerd[1608]: 2026-01-28 06:18:25.060 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" host="localhost" Jan 28 06:18:25.210064 containerd[1608]: 2026-01-28 06:18:25.061 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 06:18:25.210064 containerd[1608]: 2026-01-28 06:18:25.061 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" HandleID="k8s-pod-network.53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Workload="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" Jan 28 06:18:25.211626 containerd[1608]: 2026-01-28 06:18:25.076 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0", GenerateName:"calico-kube-controllers-7765448cd-", Namespace:"calico-system", SelfLink:"", UID:"17337418-3675-4e8a-a365-9d0165d3a261", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7765448cd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7765448cd-4smp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d91fc36cca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:25.211871 containerd[1608]: 2026-01-28 06:18:25.076 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" Jan 28 06:18:25.211871 containerd[1608]: 2026-01-28 06:18:25.076 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d91fc36cca ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" Jan 28 06:18:25.211871 containerd[1608]: 2026-01-28 06:18:25.138 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" Jan 28 06:18:25.212772 containerd[1608]: 2026-01-28 
06:18:25.147 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0", GenerateName:"calico-kube-controllers-7765448cd-", Namespace:"calico-system", SelfLink:"", UID:"17337418-3675-4e8a-a365-9d0165d3a261", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7765448cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66", Pod:"calico-kube-controllers-7765448cd-4smp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d91fc36cca", MAC:"92:05:5e:c6:ce:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:25.213019 containerd[1608]: 2026-01-28 
06:18:25.177 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" Namespace="calico-system" Pod="calico-kube-controllers-7765448cd-4smp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7765448cd--4smp8-eth0" Jan 28 06:18:25.295555 kubelet[2882]: E0128 06:18:25.291544 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:25.295555 kubelet[2882]: E0128 06:18:25.293613 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:25.299680 containerd[1608]: time="2026-01-28T06:18:25.299151665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4ll,Uid:312d579f-6aa0-4444-b03c-b14e6ab72368,Namespace:kube-system,Attempt:0,}" Jan 28 06:18:25.310064 containerd[1608]: time="2026-01-28T06:18:25.309792972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:25.317065 containerd[1608]: time="2026-01-28T06:18:25.316913251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:18:25.317163 containerd[1608]: time="2026-01-28T06:18:25.317089128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:25.322665 kubelet[2882]: E0128 06:18:25.321984 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:18:25.324520 kubelet[2882]: E0128 06:18:25.322176 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:18:25.324602 kubelet[2882]: E0128 06:18:25.324142 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:
*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:25.326636 kubelet[2882]: E0128 06:18:25.325712 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:18:25.449003 kubelet[2882]: E0128 06:18:25.447005 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", 
failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:18:25.599873 containerd[1608]: time="2026-01-28T06:18:25.598653527Z" level=info msg="connecting to shim 53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66" address="unix:///run/containerd/s/b317519d246a014d2647d9874c3e02413d898372142ce2a638fbcf187c0c5d5a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:25.743711 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 06:18:25.750573 kernel: audit: type=1325 audit(1769581105.704:593): table=filter:121 family=2 entries=20 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:25.750671 kernel: audit: type=1300 audit(1769581105.704:593): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeca08c4f0 a2=0 a3=7ffeca08c4dc items=0 ppid=3039 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:25.704000 audit[4563]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:25.704000 audit[4563]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeca08c4f0 a2=0 a3=7ffeca08c4dc items=0 ppid=3039 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 06:18:25.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:25.839788 kernel: audit: type=1327 audit(1769581105.704:593): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:25.852177 kernel: audit: type=1325 audit(1769581105.847:594): table=nat:122 family=2 entries=14 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:25.847000 audit[4563]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:25.847000 audit[4563]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeca08c4f0 a2=0 a3=0 items=0 ppid=3039 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:25.940145 kernel: audit: type=1300 audit(1769581105.847:594): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeca08c4f0 a2=0 a3=0 items=0 ppid=3039 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:25.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:25.965441 kernel: audit: type=1327 audit(1769581105.847:594): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:25.970611 systemd-networkd[1516]: calif0b078e2d85: Gained IPv6LL Jan 28 06:18:26.011708 systemd[1]: Started cri-containerd-53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66.scope - 
libcontainer container 53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66. Jan 28 06:18:26.105000 audit: BPF prog-id=180 op=LOAD Jan 28 06:18:26.121633 kernel: audit: type=1334 audit(1769581106.105:595): prog-id=180 op=LOAD Jan 28 06:18:26.135000 audit: BPF prog-id=181 op=LOAD Jan 28 06:18:26.165914 kernel: audit: type=1334 audit(1769581106.135:596): prog-id=181 op=LOAD Jan 28 06:18:26.166042 kernel: audit: type=1300 audit(1769581106.135:596): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5c211c00 a2=98 a3=1fffffffffffffff items=0 ppid=4373 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.135000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5c211c00 a2=98 a3=1fffffffffffffff items=0 ppid=4373 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.152593 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:26.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:18:26.261059 kernel: audit: type=1327 audit(1769581106.135:596): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:18:26.135000 audit: BPF prog-id=181 op=UNLOAD Jan 28 06:18:26.135000 audit[4611]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff5c211bd0 a3=0 items=0 ppid=4373 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:18:26.135000 audit: BPF prog-id=182 op=LOAD Jan 28 06:18:26.135000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5c211ae0 a2=94 a3=3 items=0 ppid=4373 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:18:26.135000 audit: BPF prog-id=182 op=UNLOAD Jan 28 06:18:26.135000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff5c211ae0 a2=94 a3=3 items=0 ppid=4373 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:18:26.135000 audit: BPF 
prog-id=183 op=LOAD Jan 28 06:18:26.135000 audit[4611]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5c211b20 a2=94 a3=7fff5c211d00 items=0 ppid=4373 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:18:26.135000 audit: BPF prog-id=183 op=UNLOAD Jan 28 06:18:26.135000 audit[4611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff5c211b20 a2=94 a3=7fff5c211d00 items=0 ppid=4373 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 06:18:26.136000 audit: BPF prog-id=184 op=LOAD Jan 28 06:18:26.136000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4559 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.136000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633831326361633839353839373963363538653263373337636335 Jan 28 06:18:26.136000 audit: BPF prog-id=184 op=UNLOAD Jan 28 06:18:26.136000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633831326361633839353839373963363538653263373337636335 Jan 28 06:18:26.136000 audit: BPF prog-id=185 op=LOAD Jan 28 06:18:26.136000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4559 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633831326361633839353839373963363538653263373337636335 Jan 28 06:18:26.136000 audit: BPF prog-id=186 op=LOAD Jan 28 06:18:26.136000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4559 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 06:18:26.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633831326361633839353839373963363538653263373337636335 Jan 28 06:18:26.136000 audit: BPF prog-id=186 op=UNLOAD Jan 28 06:18:26.136000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633831326361633839353839373963363538653263373337636335 Jan 28 06:18:26.136000 audit: BPF prog-id=185 op=UNLOAD Jan 28 06:18:26.136000 audit[4582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633831326361633839353839373963363538653263373337636335 Jan 28 06:18:26.136000 audit: BPF prog-id=187 op=LOAD Jan 28 06:18:26.136000 audit[4582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4559 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533633831326361633839353839373963363538653263373337636335 Jan 28 06:18:26.160000 audit: BPF prog-id=188 op=LOAD Jan 28 06:18:26.160000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd52722c10 a2=98 a3=3 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.160000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:26.160000 audit: BPF prog-id=188 op=UNLOAD Jan 28 06:18:26.160000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd52722be0 a3=0 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.160000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:26.161000 audit: BPF prog-id=189 op=LOAD Jan 28 06:18:26.161000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd52722a00 a2=94 a3=54428f items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:26.161000 audit: BPF prog-id=189 op=UNLOAD Jan 28 06:18:26.161000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd52722a00 a2=94 a3=54428f items=0 ppid=4373 
pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:26.262000 audit: BPF prog-id=190 op=LOAD Jan 28 06:18:26.262000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd52722a30 a2=94 a3=2 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.262000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:26.262000 audit: BPF prog-id=190 op=UNLOAD Jan 28 06:18:26.262000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd52722a30 a2=0 a3=2 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:26.262000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:26.280644 containerd[1608]: time="2026-01-28T06:18:26.279564395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwxrt,Uid:8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66,Namespace:calico-system,Attempt:0,}" Jan 28 06:18:26.280644 containerd[1608]: time="2026-01-28T06:18:26.279723191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-cgc65,Uid:cfce9196-a7e4-4009-b111-fc598ada449a,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:18:26.280644 containerd[1608]: time="2026-01-28T06:18:26.279686012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hpjqb,Uid:32744aca-35af-454a-aa76-f78a1f5cf3bf,Namespace:calico-system,Attempt:0,}" Jan 28 
06:18:26.286658 systemd-networkd[1516]: cali7d91fc36cca: Gained IPv6LL Jan 28 06:18:26.479471 kubelet[2882]: E0128 06:18:26.476556 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:18:26.701549 systemd-networkd[1516]: calid3f73915c8f: Link UP Jan 28 06:18:26.706610 systemd-networkd[1516]: calid3f73915c8f: Gained carrier Jan 28 06:18:26.723941 containerd[1608]: time="2026-01-28T06:18:26.723909433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7765448cd-4smp8,Uid:17337418-3675-4e8a-a365-9d0165d3a261,Namespace:calico-system,Attempt:0,} returns sandbox id \"53c812cac8958979c658e2c737cc52b6e008c2aec44f6f9cd71aff0a03a16f66\"" Jan 28 06:18:26.833862 containerd[1608]: time="2026-01-28T06:18:26.829793185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:18:26.891806 containerd[1608]: 2026-01-28 06:18:25.617 [INFO][4519] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 06:18:26.891806 containerd[1608]: 2026-01-28 06:18:25.669 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0 coredns-668d6bf9bc- kube-system 312d579f-6aa0-4444-b03c-b14e6ab72368 921 0 2026-01-28 06:17:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jw4ll eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid3f73915c8f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-" Jan 28 06:18:26.891806 containerd[1608]: 2026-01-28 06:18:25.669 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" Jan 28 06:18:26.891806 containerd[1608]: 2026-01-28 06:18:26.008 [INFO][4571] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" HandleID="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Workload="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.013 [INFO][4571] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" HandleID="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Workload="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00018da50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jw4ll", "timestamp":"2026-01-28 
06:18:26.008020114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.013 [INFO][4571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.013 [INFO][4571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.013 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.042 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" host="localhost" Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.150 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.264 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.294 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.355 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:26.892556 containerd[1608]: 2026-01-28 06:18:26.356 [INFO][4571] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" host="localhost" Jan 28 06:18:26.892993 containerd[1608]: 2026-01-28 06:18:26.394 [INFO][4571] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18 Jan 28 06:18:26.892993 containerd[1608]: 2026-01-28 06:18:26.467 [INFO][4571] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" host="localhost" Jan 28 06:18:26.892993 containerd[1608]: 2026-01-28 06:18:26.668 [INFO][4571] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" host="localhost" Jan 28 06:18:26.892993 containerd[1608]: 2026-01-28 06:18:26.668 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" host="localhost" Jan 28 06:18:26.892993 containerd[1608]: 2026-01-28 06:18:26.668 [INFO][4571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 06:18:26.892993 containerd[1608]: 2026-01-28 06:18:26.668 [INFO][4571] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" HandleID="k8s-pod-network.e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Workload="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" Jan 28 06:18:26.893119 containerd[1608]: 2026-01-28 06:18:26.696 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"312d579f-6aa0-4444-b03c-b14e6ab72368", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-jw4ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3f73915c8f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:26.893647 containerd[1608]: 2026-01-28 06:18:26.696 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" Jan 28 06:18:26.893647 containerd[1608]: 2026-01-28 06:18:26.696 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3f73915c8f ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" Jan 28 06:18:26.893647 containerd[1608]: 2026-01-28 06:18:26.707 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" Jan 28 06:18:26.893777 containerd[1608]: 2026-01-28 06:18:26.775 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"312d579f-6aa0-4444-b03c-b14e6ab72368", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18", Pod:"coredns-668d6bf9bc-jw4ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3f73915c8f", MAC:"72:ce:ba:2c:14:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:26.893777 containerd[1608]: 2026-01-28 06:18:26.860 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw4ll" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jw4ll-eth0" Jan 28 06:18:26.942646 containerd[1608]: time="2026-01-28T06:18:26.942463021Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:26.947751 containerd[1608]: time="2026-01-28T06:18:26.947166452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:18:26.955850 kubelet[2882]: E0128 06:18:26.955498 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:18:26.955850 kubelet[2882]: E0128 06:18:26.955768 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:18:26.956056 kubelet[2882]: E0128 06:18:26.955923 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2b8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:26.959620 containerd[1608]: time="2026-01-28T06:18:26.957626891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:26.969519 kubelet[2882]: E0128 06:18:26.961776 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:18:27.237880 containerd[1608]: time="2026-01-28T06:18:27.237692475Z" level=info msg="connecting to shim 
e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18" address="unix:///run/containerd/s/539f815abddeb1670a163b64f98545d521219a826a63b16a97dfc52053e48b72" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:27.275940 kubelet[2882]: E0128 06:18:27.275899 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:27.315989 containerd[1608]: time="2026-01-28T06:18:27.294186072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k649,Uid:c2d1933c-8984-4259-baec-74c0446170e9,Namespace:kube-system,Attempt:0,}" Jan 28 06:18:27.334457 containerd[1608]: time="2026-01-28T06:18:27.331073899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-r2r59,Uid:c5caa60c-0db4-4584-8ceb-7bfff587bf9e,Namespace:calico-apiserver,Attempt:0,}" Jan 28 06:18:27.524745 kubelet[2882]: E0128 06:18:27.523672 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:18:27.620000 audit: BPF prog-id=191 op=LOAD Jan 28 06:18:27.620000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd527228f0 a2=94 a3=1 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.620000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.622000 audit: BPF prog-id=191 op=UNLOAD Jan 28 06:18:27.622000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd527228f0 a2=94 a3=1 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.622000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.639000 audit: BPF prog-id=192 op=LOAD Jan 28 06:18:27.639000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd527228e0 a2=94 a3=4 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.639000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.641000 audit: BPF prog-id=192 op=UNLOAD Jan 28 06:18:27.641000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd527228e0 a2=0 a3=4 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.641000 audit: BPF prog-id=193 op=LOAD Jan 28 06:18:27.641000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd52722740 a2=94 a3=5 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 
06:18:27.641000 audit: BPF prog-id=193 op=UNLOAD Jan 28 06:18:27.641000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd52722740 a2=0 a3=5 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.642000 audit: BPF prog-id=194 op=LOAD Jan 28 06:18:27.642000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd52722960 a2=94 a3=6 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.642000 audit: BPF prog-id=194 op=UNLOAD Jan 28 06:18:27.642000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd52722960 a2=0 a3=6 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.642000 audit: BPF prog-id=195 op=LOAD Jan 28 06:18:27.642000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd52722110 a2=94 a3=88 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.643000 audit: BPF prog-id=196 op=LOAD Jan 28 06:18:27.643000 
audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd52721f90 a2=94 a3=2 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.643000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.648000 audit: BPF prog-id=196 op=UNLOAD Jan 28 06:18:27.648000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd52721fc0 a2=0 a3=7ffd527220c0 items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.648000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.655000 audit: BPF prog-id=195 op=UNLOAD Jan 28 06:18:27.655000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=23f3d10 a2=0 a3=d31009e9e5ba649d items=0 ppid=4373 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 06:18:27.918796 systemd[1]: Started cri-containerd-e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18.scope - libcontainer container e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18. 
Jan 28 06:18:27.917000 audit: BPF prog-id=197 op=LOAD Jan 28 06:18:27.917000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc916fcea0 a2=98 a3=1999999999999999 items=0 ppid=4373 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.917000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:18:27.920000 audit: BPF prog-id=197 op=UNLOAD Jan 28 06:18:27.920000 audit[4760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc916fce70 a3=0 items=0 ppid=4373 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.920000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:18:27.920000 audit: BPF prog-id=198 op=LOAD Jan 28 06:18:27.920000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc916fcd80 a2=94 a3=ffff items=0 ppid=4373 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.920000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:18:27.921000 audit: BPF prog-id=198 op=UNLOAD Jan 28 06:18:27.921000 audit[4760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc916fcd80 a2=94 a3=ffff items=0 ppid=4373 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:18:27.921000 audit: BPF prog-id=199 op=LOAD Jan 28 06:18:27.921000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc916fcdc0 a2=94 a3=7ffc916fcfa0 items=0 ppid=4373 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:18:27.921000 audit: BPF prog-id=199 op=UNLOAD Jan 28 06:18:27.921000 audit[4760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc916fcdc0 a2=94 a3=7ffc916fcfa0 items=0 ppid=4373 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:27.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 06:18:28.046544 systemd-networkd[1516]: cali94a89d84ad6: Link UP Jan 28 06:18:28.047629 systemd-networkd[1516]: cali94a89d84ad6: Gained carrier Jan 28 06:18:28.174000 audit: BPF prog-id=200 op=LOAD Jan 28 06:18:28.179000 audit: BPF prog-id=201 op=LOAD Jan 28 06:18:28.179000 audit[4717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4689 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:28.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536623934633965623464643531653666633731323765313161316463 Jan 28 06:18:28.182000 audit: BPF prog-id=201 op=UNLOAD Jan 28 06:18:28.182000 audit[4717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:28.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536623934633965623464643531653666633731323765313161316463 Jan 28 06:18:28.195000 audit: BPF prog-id=202 op=LOAD Jan 28 
06:18:28.195000 audit[4717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4689 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:28.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536623934633965623464643531653666633731323765313161316463 Jan 28 06:18:28.205000 audit: BPF prog-id=203 op=LOAD Jan 28 06:18:28.205000 audit[4717]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4689 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:28.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536623934633965623464643531653666633731323765313161316463 Jan 28 06:18:28.205000 audit: BPF prog-id=203 op=UNLOAD Jan 28 06:18:28.205000 audit[4717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:28.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536623934633965623464643531653666633731323765313161316463 Jan 28 
06:18:28.205000 audit: BPF prog-id=202 op=UNLOAD Jan 28 06:18:28.205000 audit[4717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:28.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536623934633965623464643531653666633731323765313161316463 Jan 28 06:18:28.206000 audit: BPF prog-id=204 op=LOAD Jan 28 06:18:28.206000 audit[4717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4689 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:28.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536623934633965623464643531653666633731323765313161316463 Jan 28 06:18:28.223568 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:26.889 [INFO][4614] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--hpjqb-eth0 goldmane-666569f655- calico-system 32744aca-35af-454a-aa76-f78a1f5cf3bf 932 0 2026-01-28 06:17:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-hpjqb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali94a89d84ad6 [] [] }} ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:26.889 [INFO][4614] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-eth0" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.403 [INFO][4674] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" HandleID="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Workload="localhost-k8s-goldmane--666569f655--hpjqb-eth0" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.404 [INFO][4674] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" HandleID="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Workload="localhost-k8s-goldmane--666569f655--hpjqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d10d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-hpjqb", "timestamp":"2026-01-28 06:18:27.403727332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.404 
[INFO][4674] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.404 [INFO][4674] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.404 [INFO][4674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.446 [INFO][4674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.606 [INFO][4674] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.716 [INFO][4674] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.768 [INFO][4674] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.816 [INFO][4674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.816 [INFO][4674] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.878 [INFO][4674] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00 Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.930 [INFO][4674] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" host="localhost" Jan 28 
06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.992 [INFO][4674] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.993 [INFO][4674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" host="localhost" Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.993 [INFO][4674] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 06:18:28.263677 containerd[1608]: 2026-01-28 06:18:27.994 [INFO][4674] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" HandleID="k8s-pod-network.4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Workload="localhost-k8s-goldmane--666569f655--hpjqb-eth0" Jan 28 06:18:28.269938 containerd[1608]: 2026-01-28 06:18:28.002 [INFO][4614] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hpjqb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"32744aca-35af-454a-aa76-f78a1f5cf3bf", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-hpjqb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94a89d84ad6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:28.269938 containerd[1608]: 2026-01-28 06:18:28.003 [INFO][4614] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-eth0" Jan 28 06:18:28.269938 containerd[1608]: 2026-01-28 06:18:28.003 [INFO][4614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94a89d84ad6 ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-eth0" Jan 28 06:18:28.269938 containerd[1608]: 2026-01-28 06:18:28.018 [INFO][4614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-eth0" Jan 28 06:18:28.269938 containerd[1608]: 2026-01-28 06:18:28.019 [INFO][4614] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hpjqb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"32744aca-35af-454a-aa76-f78a1f5cf3bf", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00", Pod:"goldmane-666569f655-hpjqb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali94a89d84ad6", MAC:"fa:8a:14:af:42:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:28.269938 containerd[1608]: 2026-01-28 06:18:28.163 [INFO][4614] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" Namespace="calico-system" Pod="goldmane-666569f655-hpjqb" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hpjqb-eth0" Jan 28 06:18:28.474097 systemd-networkd[1516]: calid3f73915c8f: Gained IPv6LL Jan 28 06:18:28.807035 kubelet[2882]: E0128 06:18:28.803669 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:18:28.854510 containerd[1608]: time="2026-01-28T06:18:28.853034276Z" level=info msg="connecting to shim 4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00" address="unix:///run/containerd/s/5e4071333a3d3b9679cee253579826bc8dc41158d79772698bc126b276c07076" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:29.063116 systemd-networkd[1516]: cali9754f021288: Link UP Jan 28 06:18:29.073056 systemd-networkd[1516]: cali9754f021288: Gained carrier Jan 28 06:18:29.107931 containerd[1608]: time="2026-01-28T06:18:29.105670715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4ll,Uid:312d579f-6aa0-4444-b03c-b14e6ab72368,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18\"" Jan 28 06:18:29.119904 kubelet[2882]: E0128 06:18:29.118944 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:29.189837 containerd[1608]: time="2026-01-28T06:18:29.189702876Z" level=info msg="CreateContainer within sandbox \"e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:27.230 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bwxrt-eth0 csi-node-driver- calico-system 8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66 798 0 2026-01-28 06:17:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bwxrt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9754f021288 [] [] }} ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:27.231 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-eth0" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:27.653 [INFO][4697] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" HandleID="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Workload="localhost-k8s-csi--node--driver--bwxrt-eth0" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:27.654 [INFO][4697] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" 
HandleID="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Workload="localhost-k8s-csi--node--driver--bwxrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bwxrt", "timestamp":"2026-01-28 06:18:27.653826715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:27.654 [INFO][4697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:27.994 [INFO][4697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:27.999 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.120 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.236 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.454 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.551 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.630 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.635 
[INFO][4697] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.651 [INFO][4697] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023 Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.829 [INFO][4697] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.926 [INFO][4697] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.926 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" host="localhost" Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.926 [INFO][4697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 06:18:29.198167 containerd[1608]: 2026-01-28 06:18:28.926 [INFO][4697] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" HandleID="k8s-pod-network.15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Workload="localhost-k8s-csi--node--driver--bwxrt-eth0" Jan 28 06:18:29.200913 containerd[1608]: 2026-01-28 06:18:29.003 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bwxrt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bwxrt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9754f021288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:29.200913 containerd[1608]: 2026-01-28 06:18:29.003 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-eth0" Jan 28 06:18:29.200913 containerd[1608]: 2026-01-28 06:18:29.003 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9754f021288 ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-eth0" Jan 28 06:18:29.200913 containerd[1608]: 2026-01-28 06:18:29.077 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-eth0" Jan 28 06:18:29.200913 containerd[1608]: 2026-01-28 06:18:29.090 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bwxrt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66", ResourceVersion:"798", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023", Pod:"csi-node-driver-bwxrt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9754f021288", MAC:"ce:02:1e:cb:50:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:29.200913 containerd[1608]: 2026-01-28 06:18:29.165 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" Namespace="calico-system" Pod="csi-node-driver-bwxrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwxrt-eth0" Jan 28 06:18:29.338124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794456938.mount: Deactivated successfully. 
Jan 28 06:18:29.348004 containerd[1608]: time="2026-01-28T06:18:29.347889963Z" level=info msg="Container 90c7233208cf11f9c06438b2f8bc0cb7e6a1e45a19394c9c377cc0067612a8d3: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:18:29.381010 containerd[1608]: time="2026-01-28T06:18:29.380969096Z" level=info msg="CreateContainer within sandbox \"e6b94c9eb4dd51e6fc7127e11a1dc48dc85291ab82f7e4be7408d55c34735b18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90c7233208cf11f9c06438b2f8bc0cb7e6a1e45a19394c9c377cc0067612a8d3\"" Jan 28 06:18:29.386046 containerd[1608]: time="2026-01-28T06:18:29.384949468Z" level=info msg="StartContainer for \"90c7233208cf11f9c06438b2f8bc0cb7e6a1e45a19394c9c377cc0067612a8d3\"" Jan 28 06:18:29.401718 containerd[1608]: time="2026-01-28T06:18:29.401684640Z" level=info msg="connecting to shim 90c7233208cf11f9c06438b2f8bc0cb7e6a1e45a19394c9c377cc0067612a8d3" address="unix:///run/containerd/s/539f815abddeb1670a163b64f98545d521219a826a63b16a97dfc52053e48b72" protocol=ttrpc version=3 Jan 28 06:18:29.488656 systemd-networkd[1516]: cali94a89d84ad6: Gained IPv6LL Jan 28 06:18:29.627743 systemd[1]: Started cri-containerd-4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00.scope - libcontainer container 4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00. Jan 28 06:18:29.781673 systemd[1]: Started cri-containerd-90c7233208cf11f9c06438b2f8bc0cb7e6a1e45a19394c9c377cc0067612a8d3.scope - libcontainer container 90c7233208cf11f9c06438b2f8bc0cb7e6a1e45a19394c9c377cc0067612a8d3. 
Jan 28 06:18:29.802739 systemd-networkd[1516]: vxlan.calico: Link UP Jan 28 06:18:29.802928 systemd-networkd[1516]: vxlan.calico: Gained carrier Jan 28 06:18:29.825829 containerd[1608]: time="2026-01-28T06:18:29.823138483Z" level=info msg="connecting to shim 15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023" address="unix:///run/containerd/s/c0ea6b7cd6cdb53916fb2618ddfefbda9ed7bd2c92c4cb04107937eafcc5df17" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:29.907491 systemd-networkd[1516]: calid2b6691f1d1: Link UP Jan 28 06:18:29.913946 systemd-networkd[1516]: calid2b6691f1d1: Gained carrier Jan 28 06:18:29.962000 audit: BPF prog-id=205 op=LOAD Jan 28 06:18:29.964000 audit: BPF prog-id=206 op=LOAD Jan 28 06:18:29.964000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4812 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333365346338346531376237363436633631366331623031643330 Jan 28 06:18:29.964000 audit: BPF prog-id=206 op=UNLOAD Jan 28 06:18:29.964000 audit[4831]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.964000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333365346338346531376237363436633631366331623031643330 Jan 28 06:18:29.968000 audit: BPF prog-id=207 op=LOAD Jan 28 06:18:29.968000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4812 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333365346338346531376237363436633631366331623031643330 Jan 28 06:18:29.969000 audit: BPF prog-id=208 op=LOAD Jan 28 06:18:29.969000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4812 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333365346338346531376237363436633631366331623031643330 Jan 28 06:18:29.970000 audit: BPF prog-id=208 op=UNLOAD Jan 28 06:18:29.970000 audit[4831]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 06:18:29.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333365346338346531376237363436633631366331623031643330 Jan 28 06:18:29.970000 audit: BPF prog-id=207 op=UNLOAD Jan 28 06:18:29.970000 audit[4831]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4812 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333365346338346531376237363436633631366331623031643330 Jan 28 06:18:29.971000 audit: BPF prog-id=209 op=LOAD Jan 28 06:18:29.971000 audit[4831]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4812 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463333365346338346531376237363436633631366331623031643330 Jan 28 06:18:29.988000 audit: BPF prog-id=210 op=LOAD Jan 28 06:18:29.991000 audit: BPF prog-id=211 op=LOAD Jan 28 06:18:29.991000 audit[4854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00025c238 a2=98 a3=0 items=0 ppid=4689 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930633732333332303863663131663963303634333862326638626330 Jan 28 06:18:29.991000 audit: BPF prog-id=211 op=UNLOAD Jan 28 06:18:29.991000 audit[4854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930633732333332303863663131663963303634333862326638626330 Jan 28 06:18:29.997912 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:29.997000 audit: BPF prog-id=212 op=LOAD Jan 28 06:18:29.997000 audit[4854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00025c488 a2=98 a3=0 items=0 ppid=4689 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930633732333332303863663131663963303634333862326638626330 Jan 28 06:18:29.998000 audit: BPF prog-id=213 op=LOAD Jan 28 06:18:29.998000 
audit[4854]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00025c218 a2=98 a3=0 items=0 ppid=4689 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930633732333332303863663131663963303634333862326638626330 Jan 28 06:18:29.998000 audit: BPF prog-id=213 op=UNLOAD Jan 28 06:18:29.998000 audit[4854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930633732333332303863663131663963303634333862326638626330 Jan 28 06:18:29.998000 audit: BPF prog-id=212 op=UNLOAD Jan 28 06:18:29.998000 audit[4854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930633732333332303863663131663963303634333862326638626330 Jan 28 06:18:29.998000 audit: BPF 
prog-id=214 op=LOAD Jan 28 06:18:29.998000 audit[4854]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00025c6e8 a2=98 a3=0 items=0 ppid=4689 pid=4854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:29.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930633732333332303863663131663963303634333862326638626330 Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:26.958 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0 calico-apiserver-6f7d9cfbc8- calico-apiserver cfce9196-a7e4-4009-b111-fc598ada449a 929 0 2026-01-28 06:17:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f7d9cfbc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f7d9cfbc8-cgc65 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid2b6691f1d1 [] [] }} ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:27.129 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:27.653 [INFO][4691] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" HandleID="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Workload="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:27.679 [INFO][4691] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" HandleID="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Workload="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f7d9cfbc8-cgc65", "timestamp":"2026-01-28 06:18:27.653829481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:27.703 [INFO][4691] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:28.949 [INFO][4691] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:28.951 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.068 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.387 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.428 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.439 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.634 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.634 [INFO][4691] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.681 [INFO][4691] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6 Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.748 [INFO][4691] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.818 [INFO][4691] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.835 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" host="localhost" Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.842 [INFO][4691] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 06:18:30.065676 containerd[1608]: 2026-01-28 06:18:29.842 [INFO][4691] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" HandleID="k8s-pod-network.291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Workload="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" Jan 28 06:18:30.071714 containerd[1608]: 2026-01-28 06:18:29.894 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0", GenerateName:"calico-apiserver-6f7d9cfbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cfce9196-a7e4-4009-b111-fc598ada449a", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7d9cfbc8", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f7d9cfbc8-cgc65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2b6691f1d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:30.071714 containerd[1608]: 2026-01-28 06:18:29.895 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" Jan 28 06:18:30.071714 containerd[1608]: 2026-01-28 06:18:29.895 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2b6691f1d1 ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" Jan 28 06:18:30.071714 containerd[1608]: 2026-01-28 06:18:29.912 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" Jan 28 06:18:30.071714 containerd[1608]: 2026-01-28 06:18:29.916 [INFO][4637] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0", GenerateName:"calico-apiserver-6f7d9cfbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cfce9196-a7e4-4009-b111-fc598ada449a", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7d9cfbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6", Pod:"calico-apiserver-6f7d9cfbc8-cgc65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2b6691f1d1", MAC:"06:64:27:9a:2d:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:30.071714 containerd[1608]: 2026-01-28 06:18:29.988 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-cgc65" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--cgc65-eth0" Jan 28 06:18:30.329066 systemd[1]: Started cri-containerd-15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023.scope - libcontainer container 15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023. Jan 28 06:18:30.418988 containerd[1608]: time="2026-01-28T06:18:30.418871950Z" level=info msg="StartContainer for \"90c7233208cf11f9c06438b2f8bc0cb7e6a1e45a19394c9c377cc0067612a8d3\" returns successfully" Jan 28 06:18:30.484756 systemd-networkd[1516]: cali9754f021288: Gained IPv6LL Jan 28 06:18:30.534696 systemd-networkd[1516]: cali1f8689441a1: Link UP Jan 28 06:18:30.541981 systemd-networkd[1516]: cali1f8689441a1: Gained carrier Jan 28 06:18:30.551722 containerd[1608]: time="2026-01-28T06:18:30.551680221Z" level=info msg="connecting to shim 291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6" address="unix:///run/containerd/s/f54541279550b26d5cfdb76fd45a0d42d6657ce689d58111858427dcb5c56eb1" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:30.618000 audit: BPF prog-id=215 op=LOAD Jan 28 06:18:30.625000 audit: BPF prog-id=216 op=LOAD Jan 28 06:18:30.625000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000140238 a2=98 a3=0 items=0 ppid=4885 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343134633661626633623664313364613261323136653061396639 Jan 28 06:18:30.626000 audit: BPF prog-id=216 op=UNLOAD Jan 28 
06:18:30.626000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4885 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343134633661626633623664313364613261323136653061396639 Jan 28 06:18:30.627000 audit: BPF prog-id=217 op=LOAD Jan 28 06:18:30.627000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000140488 a2=98 a3=0 items=0 ppid=4885 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343134633661626633623664313364613261323136653061396639 Jan 28 06:18:30.630000 audit: BPF prog-id=218 op=LOAD Jan 28 06:18:30.630000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000140218 a2=98 a3=0 items=0 ppid=4885 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343134633661626633623664313364613261323136653061396639 Jan 28 
06:18:30.630000 audit: BPF prog-id=218 op=UNLOAD Jan 28 06:18:30.630000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4885 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343134633661626633623664313364613261323136653061396639 Jan 28 06:18:30.631000 audit: BPF prog-id=217 op=UNLOAD Jan 28 06:18:30.631000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4885 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343134633661626633623664313364613261323136653061396639 Jan 28 06:18:30.636000 audit: BPF prog-id=219 op=LOAD Jan 28 06:18:30.636000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001406e8 a2=98 a3=0 items=0 ppid=4885 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.636000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135343134633661626633623664313364613261323136653061396639 Jan 28 06:18:30.716893 containerd[1608]: time="2026-01-28T06:18:30.712725394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hpjqb,Uid:32744aca-35af-454a-aa76-f78a1f5cf3bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c33e4c84e17b7646c616c1b01d30ff04d7062658a24fab5420e948dfbfa4a00\"" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:28.115 [INFO][4715] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6k649-eth0 coredns-668d6bf9bc- kube-system c2d1933c-8984-4259-baec-74c0446170e9 928 0 2026-01-28 06:17:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6k649 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f8689441a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:28.116 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:28.834 [INFO][4781] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" HandleID="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Workload="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:28.835 [INFO][4781] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" HandleID="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Workload="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004001b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6k649", "timestamp":"2026-01-28 06:18:28.83486391 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:28.835 [INFO][4781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:29.848 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:29.848 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:29.974 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.055 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.101 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.187 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.224 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.231 [INFO][4781] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.254 [INFO][4781] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.311 [INFO][4781] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.358 [INFO][4781] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.360 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" host="localhost" Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.369 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 06:18:30.732006 containerd[1608]: 2026-01-28 06:18:30.369 [INFO][4781] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" HandleID="k8s-pod-network.c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Workload="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" Jan 28 06:18:30.738055 containerd[1608]: 2026-01-28 06:18:30.428 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6k649-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c2d1933c-8984-4259-baec-74c0446170e9", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6k649", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f8689441a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:30.738055 containerd[1608]: 2026-01-28 06:18:30.429 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" Jan 28 06:18:30.738055 containerd[1608]: 2026-01-28 06:18:30.429 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f8689441a1 ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" Jan 28 06:18:30.738055 containerd[1608]: 2026-01-28 06:18:30.557 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" Jan 28 06:18:30.738055 containerd[1608]: 2026-01-28 06:18:30.563 [INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6k649-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c2d1933c-8984-4259-baec-74c0446170e9", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c", Pod:"coredns-668d6bf9bc-6k649", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f8689441a1", MAC:"62:ed:de:be:0e:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:30.738055 containerd[1608]: 2026-01-28 06:18:30.657 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6k649" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6k649-eth0" Jan 28 06:18:30.742968 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:30.749739 containerd[1608]: time="2026-01-28T06:18:30.749708975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:18:30.787000 audit: BPF prog-id=220 op=LOAD Jan 28 06:18:30.819733 kernel: kauditd_printk_skb: 196 callbacks suppressed Jan 28 06:18:30.819844 kernel: audit: type=1334 audit(1769581110.787:665): prog-id=220 op=LOAD Jan 28 06:18:30.787000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3c7a7370 a2=98 a3=0 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.906659 kernel: audit: type=1300 audit(1769581110.787:665): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3c7a7370 a2=98 a3=0 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.787000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:31.004798 kernel: audit: type=1327 audit(1769581110.787:665): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:31.004865 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Jan 28 06:18:30.787000 audit: BPF prog-id=220 op=UNLOAD Jan 28 06:18:31.030850 kernel: audit: type=1334 audit(1769581110.787:666): prog-id=220 op=UNLOAD Jan 28 06:18:31.030953 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Jan 28 06:18:31.030983 kernel: audit: type=1300 audit(1769581110.787:666): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd3c7a7340 a3=0 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.787000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd3c7a7340 a3=0 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.035382 containerd[1608]: time="2026-01-28T06:18:31.033928609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:31.046113 kernel: audit: backlog limit exceeded Jan 28 06:18:31.060862 kubelet[2882]: E0128 06:18:31.058150 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:31.072596 
containerd[1608]: time="2026-01-28T06:18:31.068085996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:18:31.073011 containerd[1608]: time="2026-01-28T06:18:31.069652295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:31.073041 kubelet[2882]: E0128 06:18:31.072891 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:18:31.073041 kubelet[2882]: E0128 06:18:31.072938 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:18:31.075735 kubelet[2882]: E0128 06:18:31.073107 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7lvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:31.080001 kubelet[2882]: E0128 06:18:31.079830 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:18:31.116108 systemd[1]: Started cri-containerd-291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6.scope - libcontainer container 291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6. 
Jan 28 06:18:30.787000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:31.172954 kernel: audit: type=1327 audit(1769581110.787:666): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:31.207672 kernel: audit: type=1334 audit(1769581110.790:667): prog-id=221 op=LOAD Jan 28 06:18:30.790000 audit: BPF prog-id=221 op=LOAD Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3c7a7180 a2=94 a3=54428f items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.790000 audit: BPF prog-id=221 op=UNLOAD Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd3c7a7180 a2=94 a3=54428f items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.790000 audit: BPF prog-id=222 op=LOAD 
Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3c7a71b0 a2=94 a3=2 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.790000 audit: BPF prog-id=222 op=UNLOAD Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd3c7a71b0 a2=0 a3=2 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.790000 audit: BPF prog-id=223 op=LOAD Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3c7a6f60 a2=94 a3=4 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.790000 audit: BPF prog-id=223 op=UNLOAD Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=6 a1=7ffd3c7a6f60 a2=94 a3=4 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.790000 audit: BPF prog-id=224 op=LOAD Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3c7a7060 a2=94 a3=7ffd3c7a71e0 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.790000 audit: BPF prog-id=224 op=UNLOAD Jan 28 06:18:30.790000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd3c7a7060 a2=0 a3=7ffd3c7a71e0 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.790000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.791000 audit: BPF prog-id=225 op=LOAD Jan 28 06:18:30.791000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3c7a6790 a2=94 
a3=2 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.791000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.791000 audit: BPF prog-id=225 op=UNLOAD Jan 28 06:18:30.791000 audit[4988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd3c7a6790 a2=0 a3=2 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.791000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.791000 audit: BPF prog-id=226 op=LOAD Jan 28 06:18:30.791000 audit[4988]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3c7a6890 a2=94 a3=30 items=0 ppid=4373 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.791000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 06:18:30.952000 audit: BPF prog-id=227 op=LOAD Jan 28 06:18:30.952000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8d45cb00 a2=98 a3=0 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:30.952000 audit: BPF prog-id=227 op=UNLOAD Jan 28 06:18:30.952000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8d45cad0 a3=0 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:30.955000 audit: BPF prog-id=228 op=LOAD Jan 28 06:18:30.955000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8d45c8f0 a2=94 a3=54428f items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:30.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:30.955000 audit: BPF prog-id=228 op=UNLOAD Jan 28 06:18:30.955000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d45c8f0 a2=94 a3=54428f items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 06:18:30.955000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:31.105000 audit: BPF prog-id=229 op=UNLOAD Jan 28 06:18:31.105000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d45c920 a2=0 a3=2 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:31.287755 systemd-networkd[1516]: vxlan.calico: Gained IPv6LL Jan 28 06:18:31.459049 containerd[1608]: time="2026-01-28T06:18:31.453873530Z" level=info msg="connecting to shim c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c" address="unix:///run/containerd/s/2c2cfaeebdba76eb79c5a31300d7ea736c0a0b1de5a7bcfd2c30b1ac535f0ff0" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:31.483130 systemd-networkd[1516]: cali07d0f5e1e1e: Link UP Jan 28 06:18:31.485751 systemd-networkd[1516]: cali07d0f5e1e1e: Gained carrier Jan 28 06:18:31.600699 systemd-networkd[1516]: calid2b6691f1d1: Gained IPv6LL Jan 28 06:18:31.608000 audit[5023]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:31.608000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc22fc4cf0 a2=0 a3=7ffc22fc4cdc items=0 ppid=3039 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 28 06:18:31.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:31.645000 audit[5023]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:31.645000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc22fc4cf0 a2=0 a3=0 items=0 ppid=3039 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:31.656095 containerd[1608]: time="2026-01-28T06:18:31.655628583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwxrt,Uid:8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66,Namespace:calico-system,Attempt:0,} returns sandbox id \"15414c6abf3b6d13da2a216e0a9f9c69d6067e513b78b163e9fb0ddf6796d023\"" Jan 28 06:18:31.666853 kubelet[2882]: I0128 06:18:31.665863 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jw4ll" podStartSLOduration=82.665842514 podStartE2EDuration="1m22.665842514s" podCreationTimestamp="2026-01-28 06:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:18:31.177978323 +0000 UTC m=+86.360244849" watchObservedRunningTime="2026-01-28 06:18:31.665842514 +0000 UTC m=+86.848109040" Jan 28 06:18:31.681931 containerd[1608]: time="2026-01-28T06:18:31.681767578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:28.278 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0 calico-apiserver-6f7d9cfbc8- calico-apiserver c5caa60c-0db4-4584-8ceb-7bfff587bf9e 927 0 2026-01-28 06:17:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f7d9cfbc8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f7d9cfbc8-r2r59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07d0f5e1e1e [] [] }} ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:28.278 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:29.406 [INFO][4795] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" HandleID="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Workload="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:29.420 [INFO][4795] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" HandleID="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Workload="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f7d9cfbc8-r2r59", "timestamp":"2026-01-28 06:18:29.406945004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:29.420 [INFO][4795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:30.377 [INFO][4795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:30.377 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:30.434 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:30.663 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:30.892 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:30.953 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.114 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.155 [INFO][4795] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.241 [INFO][4795] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6 Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.285 [INFO][4795] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.419 [INFO][4795] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.428 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" host="localhost" Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.428 [INFO][4795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 06:18:31.710824 containerd[1608]: 2026-01-28 06:18:31.428 [INFO][4795] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" HandleID="k8s-pod-network.a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Workload="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" Jan 28 06:18:31.715875 containerd[1608]: 2026-01-28 06:18:31.460 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0", GenerateName:"calico-apiserver-6f7d9cfbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5caa60c-0db4-4584-8ceb-7bfff587bf9e", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7d9cfbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f7d9cfbc8-r2r59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07d0f5e1e1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:31.715875 containerd[1608]: 2026-01-28 06:18:31.460 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" Jan 28 06:18:31.715875 containerd[1608]: 2026-01-28 06:18:31.461 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07d0f5e1e1e ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" Jan 28 06:18:31.715875 containerd[1608]: 2026-01-28 06:18:31.484 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" Jan 28 06:18:31.715875 containerd[1608]: 2026-01-28 06:18:31.500 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0", 
GenerateName:"calico-apiserver-6f7d9cfbc8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5caa60c-0db4-4584-8ceb-7bfff587bf9e", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 6, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7d9cfbc8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6", Pod:"calico-apiserver-6f7d9cfbc8-r2r59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07d0f5e1e1e", MAC:"3e:32:b8:e6:17:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 06:18:31.715875 containerd[1608]: 2026-01-28 06:18:31.671 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d9cfbc8-r2r59" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d9cfbc8--r2r59-eth0" Jan 28 06:18:31.787000 audit: BPF prog-id=230 op=LOAD Jan 28 06:18:31.794983 containerd[1608]: time="2026-01-28T06:18:31.794080706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:31.797663 containerd[1608]: 
time="2026-01-28T06:18:31.797173198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:18:31.797000 audit: BPF prog-id=231 op=LOAD Jan 28 06:18:31.797000 audit[4979]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=4961 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.801899 containerd[1608]: time="2026-01-28T06:18:31.797813171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:31.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316332313366396633313061353633353463383336386330653731 Jan 28 06:18:31.806830 kubelet[2882]: E0128 06:18:31.804736 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:18:31.806830 kubelet[2882]: E0128 06:18:31.804791 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:18:31.806830 kubelet[2882]: E0128 06:18:31.804906 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:31.811000 audit: BPF prog-id=231 op=UNLOAD Jan 28 06:18:31.811000 audit[4979]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4961 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.815791 containerd[1608]: time="2026-01-28T06:18:31.814769226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:18:31.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316332313366396633313061353633353463383336386330653731 Jan 28 06:18:31.823000 audit: BPF prog-id=232 op=LOAD Jan 28 06:18:31.823000 audit[4979]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=4961 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316332313366396633313061353633353463383336386330653731 Jan 28 06:18:31.828000 audit: BPF prog-id=233 op=LOAD Jan 28 06:18:31.828000 audit[4979]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000190218 a2=98 a3=0 items=0 ppid=4961 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316332313366396633313061353633353463383336386330653731 Jan 28 06:18:31.828000 audit: BPF prog-id=233 op=UNLOAD Jan 28 06:18:31.828000 audit[4979]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4961 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316332313366396633313061353633353463383336386330653731 Jan 28 06:18:31.828000 audit: BPF prog-id=232 op=UNLOAD Jan 28 06:18:31.828000 audit[4979]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4961 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316332313366396633313061353633353463383336386330653731 Jan 28 06:18:31.867054 systemd[1]: Started cri-containerd-c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c.scope - libcontainer container c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c. 
Jan 28 06:18:31.828000 audit: BPF prog-id=234 op=LOAD Jan 28 06:18:31.828000 audit[4979]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001906e8 a2=98 a3=0 items=0 ppid=4961 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316332313366396633313061353633353463383336386330653731 Jan 28 06:18:31.901846 containerd[1608]: time="2026-01-28T06:18:31.901741899Z" level=info msg="connecting to shim a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6" address="unix:///run/containerd/s/d23e40358e7bc4270f24b38661699cc5575bb2c866c04d7835572563f63fecf8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 06:18:31.902734 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:31.905931 containerd[1608]: time="2026-01-28T06:18:31.905897681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:31.955706 containerd[1608]: time="2026-01-28T06:18:31.955099798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:18:31.956143 containerd[1608]: time="2026-01-28T06:18:31.955804440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:31.959925 kubelet[2882]: E0128 06:18:31.959009 2882 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:18:31.959925 kubelet[2882]: E0128 06:18:31.959778 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:18:31.965765 kubelet[2882]: E0128 06:18:31.961925 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:ni
l,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:31.968970 kubelet[2882]: E0128 06:18:31.968818 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:18:31.974000 audit: BPF prog-id=235 op=LOAD Jan 28 06:18:31.980000 audit: BPF prog-id=236 op=LOAD Jan 28 06:18:31.980000 audit[5049]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c4238 a2=98 
a3=0 items=0 ppid=5036 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343434323762393862316133353539623633656666393236633131 Jan 28 06:18:31.980000 audit: BPF prog-id=236 op=UNLOAD Jan 28 06:18:31.980000 audit[5049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5036 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343434323762393862316133353539623633656666393236633131 Jan 28 06:18:31.985000 audit: BPF prog-id=237 op=LOAD Jan 28 06:18:31.985000 audit[5049]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c4488 a2=98 a3=0 items=0 ppid=5036 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343434323762393862316133353539623633656666393236633131 Jan 28 06:18:31.985000 audit: BPF prog-id=238 op=LOAD Jan 28 06:18:31.985000 audit[5049]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c0001c4218 a2=98 a3=0 items=0 ppid=5036 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343434323762393862316133353539623633656666393236633131 Jan 28 06:18:31.985000 audit: BPF prog-id=238 op=UNLOAD Jan 28 06:18:31.985000 audit[5049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5036 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343434323762393862316133353539623633656666393236633131 Jan 28 06:18:31.985000 audit: BPF prog-id=237 op=UNLOAD Jan 28 06:18:31.985000 audit[5049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5036 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343434323762393862316133353539623633656666393236633131 Jan 28 06:18:31.985000 audit: BPF prog-id=239 op=LOAD Jan 28 06:18:31.985000 audit[5049]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001c46e8 a2=98 a3=0 items=0 ppid=5036 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:31.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334343434323762393862316133353539623633656666393236633131 Jan 28 06:18:31.994880 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:32.090128 kubelet[2882]: E0128 06:18:32.090081 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:18:32.096180 kubelet[2882]: E0128 06:18:32.096159 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:32.098675 kubelet[2882]: E0128 06:18:32.097963 2882 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:18:32.281965 kubelet[2882]: E0128 06:18:32.281830 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:32.338057 systemd[1]: Started cri-containerd-a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6.scope - libcontainer container a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6. Jan 28 06:18:32.420043 containerd[1608]: time="2026-01-28T06:18:32.419998668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k649,Uid:c2d1933c-8984-4259-baec-74c0446170e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c\"" Jan 28 06:18:32.438170 kubelet[2882]: E0128 06:18:32.437104 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:32.461123 containerd[1608]: time="2026-01-28T06:18:32.460018517Z" level=info msg="CreateContainer within sandbox \"c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 06:18:32.495931 systemd-networkd[1516]: cali1f8689441a1: Gained IPv6LL Jan 28 06:18:32.630000 audit[5131]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5131 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:32.630000 audit[5131]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd92b6ffb0 a2=0 a3=7ffd92b6ff9c items=0 ppid=3039 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:32.589883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242450644.mount: Deactivated successfully. Jan 28 06:18:32.674359 containerd[1608]: time="2026-01-28T06:18:32.673767464Z" level=info msg="Container f54e2a30ff17f80152f75a13afa245562b8d11df553c20e85e9b2b39d6f43953: CDI devices from CRI Config.CDIDevices: []" Jan 28 06:18:32.683000 audit[5131]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:32.683000 audit[5131]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd92b6ffb0 a2=0 a3=0 items=0 ppid=3039 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:32.707138 containerd[1608]: time="2026-01-28T06:18:32.707110334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-cgc65,Uid:cfce9196-a7e4-4009-b111-fc598ada449a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"291c213f9f310a56354c8368c0e713a17adf0681df20ed34c0d33dba40092ef6\"" Jan 28 06:18:32.726000 audit: BPF prog-id=240 op=LOAD Jan 28 
06:18:32.731000 audit: BPF prog-id=241 op=LOAD Jan 28 06:18:32.731000 audit[5094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5073 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139336364316631396235613832363264326566653362653936343034 Jan 28 06:18:32.731000 audit: BPF prog-id=241 op=UNLOAD Jan 28 06:18:32.731000 audit[5094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5073 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139336364316631396235613832363264326566653362653936343034 Jan 28 06:18:32.732000 audit: BPF prog-id=242 op=LOAD Jan 28 06:18:32.732000 audit[5094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5073 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.732000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139336364316631396235613832363264326566653362653936343034 Jan 28 06:18:32.732000 audit: BPF prog-id=243 op=LOAD Jan 28 06:18:32.732000 audit[5094]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5073 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139336364316631396235613832363264326566653362653936343034 Jan 28 06:18:32.732000 audit: BPF prog-id=243 op=UNLOAD Jan 28 06:18:32.732000 audit[5094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5073 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139336364316631396235613832363264326566653362653936343034 Jan 28 06:18:32.732000 audit: BPF prog-id=242 op=UNLOAD Jan 28 06:18:32.732000 audit[5094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5073 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
06:18:32.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139336364316631396235613832363264326566653362653936343034 Jan 28 06:18:32.732000 audit: BPF prog-id=244 op=LOAD Jan 28 06:18:32.732000 audit[5094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5073 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139336364316631396235613832363264326566653362653936343034 Jan 28 06:18:32.749112 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 06:18:32.753806 update_engine[1584]: I20260128 06:18:32.746588 1584 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:18:32.756984 containerd[1608]: time="2026-01-28T06:18:32.756955490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:18:32.760782 update_engine[1584]: I20260128 06:18:32.760756 1584 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:18:32.761714 containerd[1608]: time="2026-01-28T06:18:32.761688529Z" level=info msg="CreateContainer within sandbox \"c444427b98b1a3559b63eff926c1197f1235b4367c299d1150626289641bf68c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f54e2a30ff17f80152f75a13afa245562b8d11df553c20e85e9b2b39d6f43953\"" Jan 28 06:18:32.775792 update_engine[1584]: I20260128 06:18:32.772086 1584 libcurl_http_fetcher.cc:449] Setting up 
timeout source: 1 seconds. Jan 28 06:18:32.785043 containerd[1608]: time="2026-01-28T06:18:32.784998650Z" level=info msg="StartContainer for \"f54e2a30ff17f80152f75a13afa245562b8d11df553c20e85e9b2b39d6f43953\"" Jan 28 06:18:32.803959 containerd[1608]: time="2026-01-28T06:18:32.803918243Z" level=info msg="connecting to shim f54e2a30ff17f80152f75a13afa245562b8d11df553c20e85e9b2b39d6f43953" address="unix:///run/containerd/s/2c2cfaeebdba76eb79c5a31300d7ea736c0a0b1de5a7bcfd2c30b1ac535f0ff0" protocol=ttrpc version=3 Jan 28 06:18:32.815724 update_engine[1584]: E20260128 06:18:32.814033 1584 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 06:18:32.815724 update_engine[1584]: I20260128 06:18:32.814185 1584 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 06:18:32.815724 update_engine[1584]: I20260128 06:18:32.815141 1584 omaha_request_action.cc:617] Omaha request response: Jan 28 06:18:32.819127 update_engine[1584]: E20260128 06:18:32.819079 1584 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jan 28 06:18:32.909676 containerd[1608]: time="2026-01-28T06:18:32.903024146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:32.920000 audit[5134]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=5134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:32.920000 audit[5134]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffa90c2030 a2=0 a3=7fffa90c201c items=0 ppid=3039 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:32.925732 containerd[1608]: time="2026-01-28T06:18:32.924901380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:18:32.931000 audit[5134]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=5134 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:32.931000 audit[5134]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffa90c2030 a2=0 a3=7fffa90c201c items=0 ppid=3039 pid=5134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:32.943889 systemd-networkd[1516]: cali07d0f5e1e1e: Gained IPv6LL Jan 28 06:18:32.951145 
containerd[1608]: time="2026-01-28T06:18:32.928733010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:32.952000 audit: BPF prog-id=245 op=LOAD Jan 28 06:18:32.952000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8d45c7e0 a2=94 a3=1 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.952000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.953000 audit: BPF prog-id=245 op=UNLOAD Jan 28 06:18:32.953000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d45c7e0 a2=94 a3=1 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.953000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.963805 kubelet[2882]: E0128 06:18:32.959759 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:32.963805 kubelet[2882]: E0128 06:18:32.959811 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:32.963805 kubelet[2882]: E0128 06:18:32.962152 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqvxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:32.968000 audit: BPF prog-id=246 op=LOAD Jan 28 06:18:32.968000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d45c7d0 a2=94 a3=4 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.968000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.968000 audit: BPF prog-id=246 op=UNLOAD Jan 28 06:18:32.968000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8d45c7d0 a2=0 a3=4 items=0 ppid=4373 
pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.968000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.969000 audit: BPF prog-id=247 op=LOAD Jan 28 06:18:32.969000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8d45c630 a2=94 a3=5 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.969000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.969000 audit: BPF prog-id=247 op=UNLOAD Jan 28 06:18:32.969000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd8d45c630 a2=0 a3=5 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.969000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.969000 audit: BPF prog-id=248 op=LOAD Jan 28 06:18:32.969000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d45c850 a2=94 a3=6 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.969000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.969000 audit: BPF prog-id=248 op=UNLOAD Jan 28 06:18:32.969000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8d45c850 a2=0 a3=6 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.969000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.970000 audit: BPF prog-id=249 op=LOAD Jan 28 06:18:32.970000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d45c000 a2=94 a3=88 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.970000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.970000 audit: BPF prog-id=250 op=LOAD Jan 28 06:18:32.970000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd8d45be80 a2=94 a3=2 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.970000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.970000 audit: BPF prog-id=250 op=UNLOAD Jan 28 06:18:32.970000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd8d45beb0 a2=0 a3=7ffd8d45bfb0 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.970000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.973000 audit: BPF prog-id=249 op=UNLOAD Jan 28 06:18:32.973000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=125ddd10 a2=0 a3=6ab6a5fc6a5bbd29 items=0 ppid=4373 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:32.973000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 06:18:32.983091 kubelet[2882]: E0128 06:18:32.980741 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:18:32.989760 update_engine[1584]: I20260128 06:18:32.989697 
1584 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 06:18:32.989949 update_engine[1584]: I20260128 06:18:32.989928 1584 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 06:18:32.989999 update_engine[1584]: I20260128 06:18:32.989986 1584 update_attempter.cc:306] Processing Done. Jan 28 06:18:32.990670 update_engine[1584]: E20260128 06:18:32.990172 1584 update_attempter.cc:619] Update failed. Jan 28 06:18:32.990889 update_engine[1584]: I20260128 06:18:32.990864 1584 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 06:18:32.990949 update_engine[1584]: I20260128 06:18:32.990934 1584 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 06:18:32.990992 update_engine[1584]: I20260128 06:18:32.990980 1584 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 28 06:18:32.991140 update_engine[1584]: I20260128 06:18:32.991123 1584 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 06:18:32.991711 update_engine[1584]: I20260128 06:18:32.991688 1584 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 06:18:32.991764 update_engine[1584]: I20260128 06:18:32.991751 1584 omaha_request_action.cc:272] Request: Jan 28 06:18:32.991764 update_engine[1584]: Jan 28 06:18:32.991764 update_engine[1584]: Jan 28 06:18:32.991764 update_engine[1584]: Jan 28 06:18:32.991764 update_engine[1584]: Jan 28 06:18:32.991764 update_engine[1584]: Jan 28 06:18:32.991764 update_engine[1584]: Jan 28 06:18:32.991910 update_engine[1584]: I20260128 06:18:32.991896 1584 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 06:18:32.991973 update_engine[1584]: I20260128 06:18:32.991961 1584 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 06:18:32.995771 update_engine[1584]: I20260128 06:18:32.995741 1584 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 06:18:33.010669 update_engine[1584]: E20260128 06:18:33.009928 1584 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 06:18:33.010669 update_engine[1584]: I20260128 06:18:33.010041 1584 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 06:18:33.010669 update_engine[1584]: I20260128 06:18:33.010053 1584 omaha_request_action.cc:617] Omaha request response: Jan 28 06:18:33.010669 update_engine[1584]: I20260128 06:18:33.010062 1584 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 06:18:33.010669 update_engine[1584]: I20260128 06:18:33.010069 1584 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 06:18:33.010669 update_engine[1584]: I20260128 06:18:33.010076 1584 update_attempter.cc:306] Processing Done. Jan 28 06:18:33.010669 update_engine[1584]: I20260128 06:18:33.010084 1584 update_attempter.cc:310] Error event sent. Jan 28 06:18:33.010669 update_engine[1584]: I20260128 06:18:33.010097 1584 update_check_scheduler.cc:74] Next update check in 47m44s Jan 28 06:18:33.073668 locksmithd[1676]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 06:18:33.074045 locksmithd[1676]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 06:18:33.078129 systemd[1]: Started cri-containerd-f54e2a30ff17f80152f75a13afa245562b8d11df553c20e85e9b2b39d6f43953.scope - libcontainer container f54e2a30ff17f80152f75a13afa245562b8d11df553c20e85e9b2b39d6f43953. 
Jan 28 06:18:33.116000 audit: BPF prog-id=226 op=UNLOAD Jan 28 06:18:33.116000 audit[4373]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0009216c0 a2=0 a3=0 items=0 ppid=4346 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.116000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 06:18:33.222820 kubelet[2882]: E0128 06:18:33.220686 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:18:33.222820 kubelet[2882]: E0128 06:18:33.220741 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:33.222820 kubelet[2882]: E0128 06:18:33.222809 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:18:33.244000 audit: BPF prog-id=251 op=LOAD Jan 28 06:18:33.246000 audit: BPF prog-id=252 op=LOAD Jan 28 06:18:33.246000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5036 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635346532613330666631376638303135326637356131336166613234 Jan 28 06:18:33.249000 audit: BPF prog-id=252 op=UNLOAD Jan 28 06:18:33.249000 audit[5133]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5036 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635346532613330666631376638303135326637356131336166613234 Jan 28 06:18:33.251000 audit: BPF prog-id=253 op=LOAD Jan 28 06:18:33.251000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5036 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635346532613330666631376638303135326637356131336166613234 Jan 28 06:18:33.251000 audit: BPF prog-id=254 op=LOAD Jan 28 06:18:33.251000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5036 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635346532613330666631376638303135326637356131336166613234 Jan 28 06:18:33.252000 audit: BPF prog-id=254 op=UNLOAD Jan 28 06:18:33.252000 audit[5133]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5036 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635346532613330666631376638303135326637356131336166613234 Jan 28 06:18:33.252000 audit: BPF prog-id=253 op=UNLOAD Jan 28 06:18:33.252000 audit[5133]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5036 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635346532613330666631376638303135326637356131336166613234 Jan 28 06:18:33.253000 audit: BPF prog-id=255 op=LOAD Jan 28 06:18:33.253000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5036 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635346532613330666631376638303135326637356131336166613234 Jan 28 06:18:33.381887 containerd[1608]: time="2026-01-28T06:18:33.381131120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d9cfbc8-r2r59,Uid:c5caa60c-0db4-4584-8ceb-7bfff587bf9e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a93cd1f19b5a8262d2efe3be9640497d79ebf97044899b3fe691e8509131c4f6\"" Jan 28 06:18:33.402660 containerd[1608]: time="2026-01-28T06:18:33.401735982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:18:33.471163 containerd[1608]: time="2026-01-28T06:18:33.470122147Z" level=info msg="StartContainer for \"f54e2a30ff17f80152f75a13afa245562b8d11df553c20e85e9b2b39d6f43953\" returns successfully" Jan 28 06:18:33.653681 containerd[1608]: time="2026-01-28T06:18:33.653630982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:33.655000 audit[5182]: NETFILTER_CFG 
table=filter:129 family=2 entries=14 op=nft_register_rule pid=5182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:33.655000 audit[5182]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff979c78f0 a2=0 a3=7fff979c78dc items=0 ppid=3039 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:33.688000 audit[5182]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=5182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:33.688000 audit[5182]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff979c78f0 a2=0 a3=7fff979c78dc items=0 ppid=3039 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:33.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:33.692776 containerd[1608]: time="2026-01-28T06:18:33.692072552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:18:33.695662 containerd[1608]: time="2026-01-28T06:18:33.694759418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:33.695752 kubelet[2882]: E0128 06:18:33.695698 2882 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:33.695752 kubelet[2882]: E0128 06:18:33.695745 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:33.695911 kubelet[2882]: E0128 06:18:33.695857 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:33.704841 kubelet[2882]: E0128 06:18:33.704137 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:18:34.102000 audit[5209]: NETFILTER_CFG table=nat:131 family=2 entries=15 op=nft_register_chain pid=5209 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:18:34.102000 audit[5209]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc9bab1780 a2=0 a3=7ffc9bab176c items=0 
ppid=4373 pid=5209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:34.102000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:18:34.121000 audit[5212]: NETFILTER_CFG table=mangle:132 family=2 entries=16 op=nft_register_chain pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:18:34.121000 audit[5212]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffb18af150 a2=0 a3=7fffb18af13c items=0 ppid=4373 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:34.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:18:34.195000 audit[5207]: NETFILTER_CFG table=raw:133 family=2 entries=21 op=nft_register_chain pid=5207 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:18:34.195000 audit[5207]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc75f7a290 a2=0 a3=7ffc75f7a27c items=0 ppid=4373 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:34.195000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:18:34.329929 kubelet[2882]: E0128 06:18:34.329885 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:34.220000 audit[5210]: NETFILTER_CFG table=filter:134 family=2 entries=226 op=nft_register_chain pid=5210 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:18:34.220000 audit[5210]: SYSCALL arch=c000003e syscall=46 success=yes exit=131252 a0=3 a1=7ffd958a7d40 a2=0 a3=7ffd958a7d2c items=0 ppid=4373 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:34.220000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:18:34.360758 kubelet[2882]: E0128 06:18:34.359995 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:18:34.363109 kubelet[2882]: E0128 06:18:34.363073 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" 
podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:18:34.515934 kubelet[2882]: I0128 06:18:34.515127 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6k649" podStartSLOduration=85.51510147 podStartE2EDuration="1m25.51510147s" podCreationTimestamp="2026-01-28 06:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:18:34.513177608 +0000 UTC m=+89.695444134" watchObservedRunningTime="2026-01-28 06:18:34.51510147 +0000 UTC m=+89.697367997" Jan 28 06:18:34.580000 audit[5222]: NETFILTER_CFG table=filter:135 family=2 entries=121 op=nft_register_chain pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 06:18:34.580000 audit[5222]: SYSCALL arch=c000003e syscall=46 success=yes exit=68664 a0=3 a1=7ffd5be85040 a2=0 a3=7ffd5be8502c items=0 ppid=4373 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:34.580000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 06:18:34.852000 audit[5225]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:34.852000 audit[5225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcfe24f770 a2=0 a3=7ffcfe24f75c items=0 ppid=3039 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:34.852000 audit: PROCTITLE 
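The `podStartSLOduration=85.51510147` in the pod_startup_latency_tracker record above is simply the observed running time minus the pod creation timestamp; the zeroed firstStartedPulling/lastFinishedPulling fields mean no image pull contributed to the measured window. A quick check with the timestamps from that record (fractional seconds truncated to microsecond precision):

```python
from datetime import datetime, timezone

# Timestamps copied from the pod_startup_latency_tracker line above.
created  = datetime(2026, 1, 28, 6, 17, 9, 0,      tzinfo=timezone.utc)   # podCreationTimestamp
observed = datetime(2026, 1, 28, 6, 18, 34, 515101, tzinfo=timezone.utc)  # watchObservedRunningTime

print((observed - created).total_seconds())
# 85.515101 — matching podStartSLOduration=85.51510147 to microsecond precision
```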
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:34.873000 audit[5225]: NETFILTER_CFG table=nat:137 family=2 entries=44 op=nft_register_rule pid=5225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:34.873000 audit[5225]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcfe24f770 a2=0 a3=7ffcfe24f75c items=0 ppid=3039 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:34.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:35.365748 kubelet[2882]: E0128 06:18:35.365029 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:35.371961 kubelet[2882]: E0128 06:18:35.371778 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:18:35.642000 audit[5227]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:35.642000 audit[5227]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb64cab60 a2=0 a3=7fffb64cab4c items=0 ppid=3039 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:35.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:35.716000 audit[5227]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=5227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:18:35.716000 audit[5227]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffb64cab60 a2=0 a3=7fffb64cab4c items=0 ppid=3039 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:35.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:18:36.378064 kubelet[2882]: E0128 06:18:36.378022 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:37.388666 kubelet[2882]: E0128 06:18:37.388157 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:40.277860 containerd[1608]: time="2026-01-28T06:18:40.277504675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:18:40.399964 containerd[1608]: time="2026-01-28T06:18:40.399920384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:40.404736 containerd[1608]: time="2026-01-28T06:18:40.404556510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
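The recurring "Nameserver limits exceeded" warnings from kubelet's dns.go reflect the libc resolver's limit of three nameservers (MAXNS in resolv.conf terms): kubelet keeps the first three entries and logs that the rest were omitted. A minimal sketch of that truncation — the fourth address below is a hypothetical extra entry, since the log reports only the three servers that were kept:

```python
# Sketch of the truncation behind the "Nameserver limits exceeded" warnings.
# Assumption: the standard 3-nameserver resolver limit; "192.0.2.53" is a
# made-up example of a dropped entry (the log does not show which were omitted).
MAX_NAMESERVERS = 3

def apply_nameserver_limit(servers: list[str]) -> tuple[list[str], list[str]]:
    """Return (applied, omitted) nameserver lists."""
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

applied, omitted = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"])
print(" ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8 — the line kubelet reports
```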
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:18:40.404736 containerd[1608]: time="2026-01-28T06:18:40.404636129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:40.405534 kubelet[2882]: E0128 06:18:40.405127 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:18:40.406666 kubelet[2882]: E0128 06:18:40.405185 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:18:40.407423 kubelet[2882]: E0128 06:18:40.406903 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2b8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:40.414582 containerd[1608]: time="2026-01-28T06:18:40.408541988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:18:40.416131 kubelet[2882]: E0128 06:18:40.415842 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:18:40.483500 containerd[1608]: time="2026-01-28T06:18:40.482953491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
06:18:40.487658 containerd[1608]: time="2026-01-28T06:18:40.487468808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:18:40.487723 containerd[1608]: time="2026-01-28T06:18:40.487678228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:40.488701 kubelet[2882]: E0128 06:18:40.488495 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:18:40.488701 kubelet[2882]: E0128 06:18:40.488649 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:18:40.488988 kubelet[2882]: E0128 06:18:40.488791 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b0ef5b7dffa04fdda9d5c4de471b2ce8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:40.492124 containerd[1608]: time="2026-01-28T06:18:40.492011689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:18:40.565738 containerd[1608]: 
time="2026-01-28T06:18:40.564642223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:40.568972 containerd[1608]: time="2026-01-28T06:18:40.568912731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:18:40.568972 containerd[1608]: time="2026-01-28T06:18:40.569011876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:40.570068 kubelet[2882]: E0128 06:18:40.570019 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:18:40.570154 kubelet[2882]: E0128 06:18:40.570084 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:18:40.570912 kubelet[2882]: E0128 06:18:40.570705 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:40.572941 kubelet[2882]: E0128 06:18:40.572725 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:18:45.277835 containerd[1608]: time="2026-01-28T06:18:45.277473866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:18:45.351844 containerd[1608]: time="2026-01-28T06:18:45.351150079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:45.355576 containerd[1608]: time="2026-01-28T06:18:45.355175049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:18:45.355576 containerd[1608]: time="2026-01-28T06:18:45.355544348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:45.356745 kubelet[2882]: E0128 06:18:45.355934 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:18:45.356745 kubelet[2882]: E0128 06:18:45.356013 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:18:45.356745 kubelet[2882]: E0128 06:18:45.356182 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7lvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:45.357851 kubelet[2882]: E0128 06:18:45.357571 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 
28 06:18:47.277129 containerd[1608]: time="2026-01-28T06:18:47.276819754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:18:47.361559 containerd[1608]: time="2026-01-28T06:18:47.360993505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:47.364672 containerd[1608]: time="2026-01-28T06:18:47.364562951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:18:47.364672 containerd[1608]: time="2026-01-28T06:18:47.364660813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:47.365169 kubelet[2882]: E0128 06:18:47.364974 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:18:47.365169 kubelet[2882]: E0128 06:18:47.365036 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:18:47.365169 kubelet[2882]: E0128 06:18:47.365145 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 06:18:47.370020 containerd[1608]: time="2026-01-28T06:18:47.369638089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:18:47.443876 containerd[1608]: time="2026-01-28T06:18:47.443736068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:47.447291 containerd[1608]: time="2026-01-28T06:18:47.446745607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:18:47.447291 containerd[1608]: time="2026-01-28T06:18:47.446905375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:47.447775 kubelet[2882]: E0128 06:18:47.447693 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:18:47.447775 kubelet[2882]: E0128 06:18:47.447742 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:18:47.447870 kubelet[2882]: E0128 06:18:47.447843 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:47.450417 kubelet[2882]: E0128 06:18:47.450006 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:18:48.277024 containerd[1608]: time="2026-01-28T06:18:48.276964494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:18:48.343488 containerd[1608]: time="2026-01-28T06:18:48.342746311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:18:48.348177 containerd[1608]: time="2026-01-28T06:18:48.347489332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:18:48.348177 containerd[1608]: time="2026-01-28T06:18:48.347595580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:48.348710 kubelet[2882]: E0128 06:18:48.347737 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:48.348710 kubelet[2882]: E0128 06:18:48.347788 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:48.348710 kubelet[2882]: E0128 06:18:48.347923 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqvxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:48.350152 kubelet[2882]: E0128 06:18:48.349812 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:18:50.275086 containerd[1608]: time="2026-01-28T06:18:50.274908663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:18:50.343082 containerd[1608]: time="2026-01-28T06:18:50.342883840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
06:18:50.351118 containerd[1608]: time="2026-01-28T06:18:50.350025470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:18:50.351118 containerd[1608]: time="2026-01-28T06:18:50.350182562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:18:50.351677 kubelet[2882]: E0128 06:18:50.351041 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:50.351677 kubelet[2882]: E0128 06:18:50.351122 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:18:50.352141 kubelet[2882]: E0128 06:18:50.351688 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:18:50.353920 kubelet[2882]: E0128 06:18:50.353863 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:18:52.646802 kubelet[2882]: E0128 06:18:52.646705 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:18:53.277359 kubelet[2882]: E0128 06:18:53.276113 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:18:55.241747 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:52074.service - OpenSSH per-connection server daemon (10.0.0.1:52074). Jan 28 06:18:55.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.25:22-10.0.0.1:52074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:18:55.248841 kernel: kauditd_printk_skb: 225 callbacks suppressed Jan 28 06:18:55.248995 kernel: audit: type=1130 audit(1769581135.241:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.25:22-10.0.0.1:52074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:18:55.293730 kubelet[2882]: E0128 06:18:55.292544 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:18:55.463000 audit[5286]: USER_ACCT pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.464687 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 52074 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:18:55.468703 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:18:55.480608 systemd-logind[1580]: New session 9 of user core. 
Jan 28 06:18:55.466000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.519854 kernel: audit: type=1101 audit(1769581135.463:746): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.519947 kernel: audit: type=1103 audit(1769581135.466:747): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.466000 audit[5286]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce3198ef0 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:55.541031 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 28 06:18:55.570668 kernel: audit: type=1006 audit(1769581135.466:748): pid=5286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 28 06:18:55.570765 kernel: audit: type=1300 audit(1769581135.466:748): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce3198ef0 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:18:55.570789 kernel: audit: type=1327 audit(1769581135.466:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:18:55.466000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:18:55.547000 audit[5286]: USER_START pid=5286 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.618931 kernel: audit: type=1105 audit(1769581135.547:749): pid=5286 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.550000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.660082 kernel: audit: type=1103 audit(1769581135.550:750): pid=5290 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.886943 sshd[5290]: Connection closed by 10.0.0.1 port 52074 Jan 28 06:18:55.887855 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jan 28 06:18:55.891000 audit[5286]: USER_END pid=5286 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.896726 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:52074.service: Deactivated successfully. Jan 28 06:18:55.901863 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 06:18:55.904699 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Jan 28 06:18:55.908991 systemd-logind[1580]: Removed session 9. 
Jan 28 06:18:55.891000 audit[5286]: CRED_DISP pid=5286 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.959623 kernel: audit: type=1106 audit(1769581135.891:751): pid=5286 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.959727 kernel: audit: type=1104 audit(1769581135.891:752): pid=5286 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:18:55.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.25:22-10.0.0.1:52074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:18:58.275845 kubelet[2882]: E0128 06:18:58.275780 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:18:59.277611 kubelet[2882]: E0128 06:18:59.276876 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:19:00.910803 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:52084.service - OpenSSH per-connection server daemon (10.0.0.1:52084). Jan 28 06:19:00.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.25:22-10.0.0.1:52084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:00.920665 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:00.920727 kernel: audit: type=1130 audit(1769581140.910:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.25:22-10.0.0.1:52084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:01.041000 audit[5308]: USER_ACCT pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.042878 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 52084 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:01.046528 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:01.060539 systemd-logind[1580]: New session 10 of user core. 
Jan 28 06:19:01.043000 audit[5308]: CRED_ACQ pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.103574 kernel: audit: type=1101 audit(1769581141.041:755): pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.103676 kernel: audit: type=1103 audit(1769581141.043:756): pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.129146 kernel: audit: type=1006 audit(1769581141.043:757): pid=5308 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 28 06:19:01.043000 audit[5308]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdb650fa0 a2=3 a3=0 items=0 ppid=1 pid=5308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:01.157624 kernel: audit: type=1300 audit(1769581141.043:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdb650fa0 a2=3 a3=0 items=0 ppid=1 pid=5308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:01.157706 kernel: audit: type=1327 audit(1769581141.043:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:01.043000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:01.171784 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 06:19:01.176000 audit[5308]: USER_START pid=5308 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.180000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.257126 kernel: audit: type=1105 audit(1769581141.176:758): pid=5308 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.257355 kernel: audit: type=1103 audit(1769581141.180:759): pid=5312 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.361406 sshd[5312]: Connection closed by 10.0.0.1 port 52084 Jan 28 06:19:01.362150 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:01.366000 audit[5308]: USER_END pid=5308 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 06:19:01.370953 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:52084.service: Deactivated successfully. Jan 28 06:19:01.375043 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 06:19:01.379062 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Jan 28 06:19:01.382696 systemd-logind[1580]: Removed session 10. Jan 28 06:19:01.366000 audit[5308]: CRED_DISP pid=5308 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.445568 kernel: audit: type=1106 audit(1769581141.366:760): pid=5308 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.445696 kernel: audit: type=1104 audit(1769581141.366:761): pid=5308 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:01.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.25:22-10.0.0.1:52084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:03.285345 kubelet[2882]: E0128 06:19:03.283623 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:19:04.277959 kubelet[2882]: E0128 06:19:04.277492 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:19:04.278566 containerd[1608]: time="2026-01-28T06:19:04.277884676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:19:04.360289 containerd[1608]: time="2026-01-28T06:19:04.360045182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:04.364727 containerd[1608]: time="2026-01-28T06:19:04.364663092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:19:04.365044 containerd[1608]: time="2026-01-28T06:19:04.364726624Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:04.366022 kubelet[2882]: E0128 06:19:04.365670 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:19:04.366022 kubelet[2882]: E0128 06:19:04.365875 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:19:04.366745 kubelet[2882]: E0128 06:19:04.366515 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.cr
t,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2b8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:04.368539 kubelet[2882]: E0128 06:19:04.368121 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:19:05.277539 kubelet[2882]: E0128 06:19:05.276792 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:19:06.384856 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:54800.service - OpenSSH per-connection server daemon (10.0.0.1:54800). Jan 28 06:19:06.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.25:22-10.0.0.1:54800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:06.393452 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:06.393502 kernel: audit: type=1130 audit(1769581146.384:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.25:22-10.0.0.1:54800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:06.529000 audit[5331]: USER_ACCT pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.535114 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:06.542029 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 54800 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:06.548845 systemd-logind[1580]: New session 11 of user core. 
Jan 28 06:19:06.561696 kernel: audit: type=1101 audit(1769581146.529:764): pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.561784 kernel: audit: type=1103 audit(1769581146.531:765): pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.531000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.595644 kernel: audit: type=1006 audit(1769581146.532:766): pid=5331 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 06:19:06.613102 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 28 06:19:06.532000 audit[5331]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc1557530 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:06.656503 kernel: audit: type=1300 audit(1769581146.532:766): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc1557530 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:06.532000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:06.670682 kernel: audit: type=1327 audit(1769581146.532:766): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:06.620000 audit[5331]: USER_START pid=5331 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.720525 kernel: audit: type=1105 audit(1769581146.620:767): pid=5331 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.658000 audit[5335]: CRED_ACQ pid=5335 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.751344 kernel: audit: 
type=1103 audit(1769581146.658:768): pid=5335 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.900555 sshd[5335]: Connection closed by 10.0.0.1 port 54800 Jan 28 06:19:06.901183 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:06.904000 audit[5331]: USER_END pid=5331 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.914058 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Jan 28 06:19:06.916362 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:54800.service: Deactivated successfully. Jan 28 06:19:06.921587 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 06:19:06.925138 systemd-logind[1580]: Removed session 11. 
Jan 28 06:19:06.905000 audit[5331]: CRED_DISP pid=5331 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.965971 kernel: audit: type=1106 audit(1769581146.904:769): pid=5331 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.966091 kernel: audit: type=1104 audit(1769581146.905:770): pid=5331 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:06.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.25:22-10.0.0.1:54800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:10.280771 containerd[1608]: time="2026-01-28T06:19:10.280720038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:19:10.358447 containerd[1608]: time="2026-01-28T06:19:10.357734657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:10.361173 containerd[1608]: time="2026-01-28T06:19:10.360765252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:19:10.361173 containerd[1608]: time="2026-01-28T06:19:10.360935758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:10.361929 kubelet[2882]: E0128 06:19:10.361159 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:19:10.361929 kubelet[2882]: E0128 06:19:10.361817 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:19:10.363043 kubelet[2882]: E0128 06:19:10.361983 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b0ef5b7dffa04fdda9d5c4de471b2ce8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:10.366599 containerd[1608]: time="2026-01-28T06:19:10.365760142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:19:10.441175 containerd[1608]: 
time="2026-01-28T06:19:10.440945099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:10.444488 containerd[1608]: time="2026-01-28T06:19:10.444171506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:19:10.444585 containerd[1608]: time="2026-01-28T06:19:10.444525943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:10.445083 kubelet[2882]: E0128 06:19:10.444800 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:19:10.445083 kubelet[2882]: E0128 06:19:10.444868 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:19:10.445083 kubelet[2882]: E0128 06:19:10.445014 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:10.447444 kubelet[2882]: E0128 06:19:10.447139 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:19:11.277871 containerd[1608]: time="2026-01-28T06:19:11.277502108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:19:11.365288 containerd[1608]: time="2026-01-28T06:19:11.365097215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:11.374169 containerd[1608]: time="2026-01-28T06:19:11.374097202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:19:11.374588 containerd[1608]: time="2026-01-28T06:19:11.374448583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:11.374631 kubelet[2882]: E0128 06:19:11.374598 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:19:11.375591 kubelet[2882]: E0128 06:19:11.374645 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:19:11.375591 kubelet[2882]: E0128 06:19:11.374755 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:11.381527 containerd[1608]: time="2026-01-28T06:19:11.381138940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:19:11.464128 containerd[1608]: time="2026-01-28T06:19:11.463817703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:11.470167 containerd[1608]: time="2026-01-28T06:19:11.469916065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:19:11.470676 containerd[1608]: time="2026-01-28T06:19:11.470473901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:11.470719 kubelet[2882]: E0128 06:19:11.470156 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:19:11.470719 
kubelet[2882]: E0128 06:19:11.470436 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:19:11.470719 kubelet[2882]: E0128 06:19:11.470555 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:11.474496 kubelet[2882]: E0128 06:19:11.473821 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:19:11.929061 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:54802.service - OpenSSH per-connection server daemon (10.0.0.1:54802). Jan 28 06:19:11.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.25:22-10.0.0.1:54802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:11.937656 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:11.937755 kernel: audit: type=1130 audit(1769581151.928:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.25:22-10.0.0.1:54802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:12.075000 audit[5349]: USER_ACCT pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.078135 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 54802 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:12.081821 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:12.099546 systemd-logind[1580]: New session 12 of user core. 
Jan 28 06:19:12.078000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.146650 kernel: audit: type=1101 audit(1769581152.075:773): pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.146792 kernel: audit: type=1103 audit(1769581152.078:774): pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.146830 kernel: audit: type=1006 audit(1769581152.078:775): pid=5349 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 28 06:19:12.078000 audit[5349]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc326a2130 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:12.162549 kernel: audit: type=1300 audit(1769581152.078:775): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc326a2130 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:12.078000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:12.200729 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 28 06:19:12.224970 kernel: audit: type=1327 audit(1769581152.078:775): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:12.226007 kernel: audit: type=1105 audit(1769581152.208:776): pid=5349 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.208000 audit[5349]: USER_START pid=5349 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.265686 kernel: audit: type=1103 audit(1769581152.212:777): pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.212000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.277991 containerd[1608]: time="2026-01-28T06:19:12.277633450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:19:12.374481 containerd[1608]: time="2026-01-28T06:19:12.374338945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:12.378005 containerd[1608]: time="2026-01-28T06:19:12.377863256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:19:12.378005 containerd[1608]: time="2026-01-28T06:19:12.377954385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:12.379045 kubelet[2882]: E0128 06:19:12.378913 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:19:12.379045 kubelet[2882]: E0128 06:19:12.379038 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:19:12.379627 kubelet[2882]: E0128 06:19:12.379157 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7lvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:12.382868 kubelet[2882]: E0128 06:19:12.382827 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:19:12.446628 sshd[5353]: Connection closed by 10.0.0.1 port 54802 Jan 28 06:19:12.447118 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:12.449000 audit[5349]: USER_END pid=5349 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.457884 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:54802.service: Deactivated successfully. Jan 28 06:19:12.466004 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 06:19:12.468765 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Jan 28 06:19:12.473968 systemd-logind[1580]: Removed session 12. Jan 28 06:19:12.450000 audit[5349]: CRED_DISP pid=5349 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.531684 kernel: audit: type=1106 audit(1769581152.449:778): pid=5349 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.531798 kernel: audit: type=1104 audit(1769581152.450:779): pid=5349 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:12.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.25:22-10.0.0.1:54802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:17.299162 containerd[1608]: time="2026-01-28T06:19:17.297478446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:19:17.399589 containerd[1608]: time="2026-01-28T06:19:17.399000346Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:17.401902 containerd[1608]: time="2026-01-28T06:19:17.401666789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:19:17.401902 containerd[1608]: time="2026-01-28T06:19:17.401759073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:17.403015 kubelet[2882]: E0128 06:19:17.402805 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:19:17.403015 kubelet[2882]: E0128 06:19:17.402939 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:19:17.403859 kubelet[2882]: E0128 06:19:17.403447 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:17.405821 kubelet[2882]: E0128 06:19:17.405554 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:19:17.543847 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:17.544789 kernel: audit: type=1130 audit(1769581157.498:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.25:22-10.0.0.1:49650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:17.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.25:22-10.0.0.1:49650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:17.499854 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:49650.service - OpenSSH per-connection server daemon (10.0.0.1:49650). 
Jan 28 06:19:17.760000 audit[5377]: USER_ACCT pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:17.766940 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 49650 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:17.768870 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:17.783820 systemd-logind[1580]: New session 13 of user core. Jan 28 06:19:17.796942 kernel: audit: type=1101 audit(1769581157.760:782): pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:17.764000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:17.851777 kernel: audit: type=1103 audit(1769581157.764:783): pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:17.851915 kernel: audit: type=1006 audit(1769581157.764:784): pid=5377 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 06:19:17.764000 audit[5377]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe938a66a0 a2=3 a3=0 items=0 ppid=1 pid=5377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:17.871789 kernel: audit: type=1300 audit(1769581157.764:784): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe938a66a0 a2=3 a3=0 items=0 ppid=1 pid=5377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:17.873865 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 06:19:17.764000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:17.939731 kernel: audit: type=1327 audit(1769581157.764:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:17.941753 kernel: audit: type=1105 audit(1769581157.878:785): pid=5377 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:17.878000 audit[5377]: USER_START pid=5377 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:17.882000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:17.999568 kernel: audit: type=1103 audit(1769581157.882:786): pid=5381 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:18.278096 containerd[1608]: time="2026-01-28T06:19:18.277965924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:19:18.305069 sshd[5381]: Connection closed by 10.0.0.1 port 49650 Jan 28 06:19:18.306118 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:18.312000 audit[5377]: USER_END pid=5377 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:18.319803 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:49650.service: Deactivated successfully. Jan 28 06:19:18.324933 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 06:19:18.345720 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. Jan 28 06:19:18.348580 systemd-logind[1580]: Removed session 13. 
Jan 28 06:19:18.366663 kernel: audit: type=1106 audit(1769581158.312:787): pid=5377 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:18.313000 audit[5377]: CRED_DISP pid=5377 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:18.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.25:22-10.0.0.1:49650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:18.394761 kernel: audit: type=1104 audit(1769581158.313:788): pid=5377 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:18.413056 containerd[1608]: time="2026-01-28T06:19:18.412096299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:18.416728 containerd[1608]: time="2026-01-28T06:19:18.416117063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:19:18.416728 containerd[1608]: time="2026-01-28T06:19:18.416680947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:18.419653 kubelet[2882]: E0128 06:19:18.417799 2882 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:19:18.419653 kubelet[2882]: E0128 06:19:18.418087 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:19:18.419653 kubelet[2882]: E0128 06:19:18.418853 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqvxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:18.436110 kubelet[2882]: E0128 06:19:18.420636 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:19:19.284714 kubelet[2882]: E0128 06:19:19.283590 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:19:23.336845 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:49656.service - OpenSSH per-connection server daemon (10.0.0.1:49656). Jan 28 06:19:23.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.25:22-10.0.0.1:49656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:23.344637 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:23.344712 kernel: audit: type=1130 audit(1769581163.335:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.25:22-10.0.0.1:49656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:23.518000 audit[5420]: USER_ACCT pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.521186 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 49656 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:23.524723 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:23.539184 systemd-logind[1580]: New session 14 of user core. 
Jan 28 06:19:23.521000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.589179 kernel: audit: type=1101 audit(1769581163.518:791): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.589652 kernel: audit: type=1103 audit(1769581163.521:792): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.589688 kernel: audit: type=1006 audit(1769581163.521:793): pid=5420 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 28 06:19:23.623106 kernel: audit: type=1300 audit(1769581163.521:793): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdad04e5e0 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:23.521000 audit[5420]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdad04e5e0 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:23.650744 kernel: audit: type=1327 audit(1769581163.521:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:23.521000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:23.663540 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 06:19:23.669000 audit[5420]: USER_START pid=5420 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.675000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.746739 kernel: audit: type=1105 audit(1769581163.669:794): pid=5420 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.748885 kernel: audit: type=1103 audit(1769581163.675:795): pid=5424 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.899819 sshd[5424]: Connection closed by 10.0.0.1 port 49656 Jan 28 06:19:23.901562 sshd-session[5420]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:23.905000 audit[5420]: USER_END pid=5420 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 06:19:23.913654 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:49656.service: Deactivated successfully. Jan 28 06:19:23.919641 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 06:19:23.923100 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Jan 28 06:19:23.927597 systemd-logind[1580]: Removed session 14. Jan 28 06:19:23.905000 audit[5420]: CRED_DISP pid=5420 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.977776 kernel: audit: type=1106 audit(1769581163.905:796): pid=5420 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.977864 kernel: audit: type=1104 audit(1769581163.905:797): pid=5420 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:23.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.25:22-10.0.0.1:49656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:24.281561 kubelet[2882]: E0128 06:19:24.280029 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:19:24.281561 kubelet[2882]: E0128 06:19:24.280570 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:19:25.290517 kubelet[2882]: E0128 06:19:25.288943 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:19:28.277175 kubelet[2882]: E0128 06:19:28.276543 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:19:28.925008 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:39704.service - OpenSSH per-connection server daemon (10.0.0.1:39704). Jan 28 06:19:28.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.25:22-10.0.0.1:39704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:28.932620 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:28.932664 kernel: audit: type=1130 audit(1769581168.923:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.25:22-10.0.0.1:39704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:29.061000 audit[5438]: USER_ACCT pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.064104 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 39704 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:29.068577 sshd-session[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:29.083765 systemd-logind[1580]: New session 15 of user core. Jan 28 06:19:29.062000 audit[5438]: CRED_ACQ pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.132905 kernel: audit: type=1101 audit(1769581169.061:800): pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.132994 kernel: audit: type=1103 audit(1769581169.062:801): pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.133043 kernel: audit: type=1006 audit(1769581169.062:802): pid=5438 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 28 06:19:29.134887 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 28 06:19:29.062000 audit[5438]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc021abf0 a2=3 a3=0 items=0 ppid=1 pid=5438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:29.188892 kernel: audit: type=1300 audit(1769581169.062:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc021abf0 a2=3 a3=0 items=0 ppid=1 pid=5438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:29.189155 kernel: audit: type=1327 audit(1769581169.062:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:29.062000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:29.137000 audit[5438]: USER_START pid=5438 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.245450 kernel: audit: type=1105 audit(1769581169.137:803): pid=5438 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.245565 kernel: audit: type=1103 audit(1769581169.142:804): pid=5442 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 
06:19:29.142000 audit[5442]: CRED_ACQ pid=5442 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.350848 sshd[5442]: Connection closed by 10.0.0.1 port 39704 Jan 28 06:19:29.350894 sshd-session[5438]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:29.353000 audit[5438]: USER_END pid=5438 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.354000 audit[5438]: CRED_DISP pid=5438 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.423608 kernel: audit: type=1106 audit(1769581169.353:805): pid=5438 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.423715 kernel: audit: type=1104 audit(1769581169.354:806): pid=5438 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.431966 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:39704.service: Deactivated successfully. 
Jan 28 06:19:29.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.25:22-10.0.0.1:39704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:29.435955 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 06:19:29.439531 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Jan 28 06:19:29.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.25:22-10.0.0.1:39718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:29.445643 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:39718.service - OpenSSH per-connection server daemon (10.0.0.1:39718). Jan 28 06:19:29.449774 systemd-logind[1580]: Removed session 15. Jan 28 06:19:29.581000 audit[5456]: USER_ACCT pid=5456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.583722 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 39718 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:29.584000 audit[5456]: CRED_ACQ pid=5456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.584000 audit[5456]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7115cfb0 a2=3 a3=0 items=0 ppid=1 pid=5456 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:29.584000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:29.587115 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:29.601083 systemd-logind[1580]: New session 16 of user core. Jan 28 06:19:29.618822 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 06:19:29.639000 audit[5456]: USER_START pid=5456 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.643000 audit[5460]: CRED_ACQ pid=5460 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.911563 sshd[5460]: Connection closed by 10.0.0.1 port 39718 Jan 28 06:19:29.912135 sshd-session[5456]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:29.913000 audit[5456]: USER_END pid=5456 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.913000 audit[5456]: CRED_DISP pid=5456 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:29.931933 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:39718.service: Deactivated successfully. 
Jan 28 06:19:29.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.25:22-10.0.0.1:39718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:29.939858 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 06:19:29.945025 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Jan 28 06:19:29.947812 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:39726.service - OpenSSH per-connection server daemon (10.0.0.1:39726). Jan 28 06:19:29.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.25:22-10.0.0.1:39726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:29.957910 systemd-logind[1580]: Removed session 16. Jan 28 06:19:30.060000 audit[5471]: USER_ACCT pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:30.062579 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 39726 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:30.063000 audit[5471]: CRED_ACQ pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:30.064000 audit[5471]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4317ffa0 a2=3 a3=0 items=0 ppid=1 pid=5471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:30.064000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:30.067503 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:30.080982 systemd-logind[1580]: New session 17 of user core. Jan 28 06:19:30.088877 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 06:19:30.094000 audit[5471]: USER_START pid=5471 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:30.098000 audit[5475]: CRED_ACQ pid=5475 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:30.286022 sshd[5475]: Connection closed by 10.0.0.1 port 39726 Jan 28 06:19:30.286585 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:30.288000 audit[5471]: USER_END pid=5471 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:30.289000 audit[5471]: CRED_DISP pid=5471 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:30.296667 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:39726.service: Deactivated successfully. 
Jan 28 06:19:30.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.25:22-10.0.0.1:39726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:30.301752 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 06:19:30.305916 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Jan 28 06:19:30.312820 systemd-logind[1580]: Removed session 17. Jan 28 06:19:31.275496 kubelet[2882]: E0128 06:19:31.274877 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:19:32.276147 kubelet[2882]: E0128 06:19:32.275910 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:19:32.276147 kubelet[2882]: E0128 06:19:32.276091 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:19:35.275492 kubelet[2882]: E0128 
06:19:35.275132 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:19:35.279035 kubelet[2882]: E0128 06:19:35.278929 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:19:35.304644 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:48226.service - OpenSSH per-connection server daemon (10.0.0.1:48226). Jan 28 06:19:35.328956 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 06:19:35.328995 kernel: audit: type=1130 audit(1769581175.303:826): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.25:22-10.0.0.1:48226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:35.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.25:22-10.0.0.1:48226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:35.460000 audit[5488]: USER_ACCT pid=5488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.463153 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 48226 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:35.466614 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:35.492745 kernel: audit: type=1101 audit(1769581175.460:827): pid=5488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.492851 kernel: audit: type=1103 audit(1769581175.462:828): pid=5488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.462000 audit[5488]: CRED_ACQ pid=5488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.496538 systemd-logind[1580]: New session 18 of user core. 
Jan 28 06:19:35.548876 kernel: audit: type=1006 audit(1769581175.462:829): pid=5488 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 28 06:19:35.462000 audit[5488]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd56600a0 a2=3 a3=0 items=0 ppid=1 pid=5488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:35.589638 kernel: audit: type=1300 audit(1769581175.462:829): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd56600a0 a2=3 a3=0 items=0 ppid=1 pid=5488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:35.462000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:35.594814 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 06:19:35.601000 audit[5488]: USER_START pid=5488 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.646583 kernel: audit: type=1327 audit(1769581175.462:829): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:35.646669 kernel: audit: type=1105 audit(1769581175.601:830): pid=5488 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.646732 kernel: audit: type=1103 audit(1769581175.606:831): pid=5492 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.606000 audit[5492]: CRED_ACQ pid=5492 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.871537 sshd[5492]: Connection closed by 10.0.0.1 port 48226 Jan 28 06:19:35.874060 sshd-session[5488]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:35.874000 audit[5488]: USER_END pid=5488 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 28 06:19:35.880077 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Jan 28 06:19:35.883042 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:48226.service: Deactivated successfully. Jan 28 06:19:35.887990 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 06:19:35.890954 systemd-logind[1580]: Removed session 18. Jan 28 06:19:35.875000 audit[5488]: CRED_DISP pid=5488 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.947633 kernel: audit: type=1106 audit(1769581175.874:832): pid=5488 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.947724 kernel: audit: type=1104 audit(1769581175.875:833): pid=5488 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:35.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.25:22-10.0.0.1:48226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:39.277795 kubelet[2882]: E0128 06:19:39.277741 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:19:39.280953 kubelet[2882]: E0128 06:19:39.279940 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:19:40.274137 kubelet[2882]: E0128 06:19:40.273990 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:19:40.894887 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:48230.service - OpenSSH per-connection server daemon (10.0.0.1:48230). 
Jan 28 06:19:40.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.25:22-10.0.0.1:48230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:40.902030 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:40.902082 kernel: audit: type=1130 audit(1769581180.893:835): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.25:22-10.0.0.1:48230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:41.103000 audit[5506]: USER_ACCT pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.105824 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 48230 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:41.110000 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:41.133795 systemd-logind[1580]: New session 19 of user core. 
Jan 28 06:19:41.153097 kernel: audit: type=1101 audit(1769581181.103:836): pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.153164 kernel: audit: type=1103 audit(1769581181.106:837): pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.106000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.210673 kernel: audit: type=1006 audit(1769581181.106:838): pid=5506 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 28 06:19:41.106000 audit[5506]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc41c27680 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:41.254639 kernel: audit: type=1300 audit(1769581181.106:838): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc41c27680 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:41.106000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:41.267812 kernel: audit: type=1327 audit(1769581181.106:838): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:41.273155 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 06:19:41.282151 kubelet[2882]: E0128 06:19:41.280121 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:19:41.285000 audit[5506]: USER_START pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.336690 kernel: audit: type=1105 audit(1769581181.285:839): pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.289000 audit[5510]: CRED_ACQ pid=5510 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.368611 kernel: audit: type=1103 audit(1769581181.289:840): pid=5510 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.611844 sshd[5510]: Connection closed by 10.0.0.1 port 48230 Jan 28 06:19:41.614841 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:41.616000 audit[5506]: USER_END pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.622963 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:48230.service: Deactivated successfully. Jan 28 06:19:41.627165 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 06:19:41.630600 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Jan 28 06:19:41.632170 systemd-logind[1580]: Removed session 19. Jan 28 06:19:41.659717 kernel: audit: type=1106 audit(1769581181.616:841): pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.616000 audit[5506]: CRED_DISP pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.690639 kernel: audit: type=1104 audit(1769581181.616:842): pid=5506 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:41.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.25:22-10.0.0.1:48230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:42.280585 kubelet[2882]: E0128 06:19:42.280137 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:19:46.643070 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:38472.service - OpenSSH per-connection server daemon (10.0.0.1:38472). Jan 28 06:19:46.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.25:22-10.0.0.1:38472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:46.650801 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:46.650895 kernel: audit: type=1130 audit(1769581186.642:844): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.25:22-10.0.0.1:38472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:46.866000 audit[5525]: USER_ACCT pid=5525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:46.869685 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 38472 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:46.873849 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:46.895939 systemd-logind[1580]: New session 20 of user core. Jan 28 06:19:46.870000 audit[5525]: CRED_ACQ pid=5525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:46.937810 kernel: audit: type=1101 audit(1769581186.866:845): pid=5525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:46.938533 kernel: audit: type=1103 audit(1769581186.870:846): pid=5525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:46.938592 kernel: audit: type=1006 audit(1769581186.870:847): pid=5525 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 28 06:19:46.870000 audit[5525]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe07eebb30 a2=3 a3=0 items=0 ppid=1 pid=5525 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:46.958784 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 06:19:46.990461 kernel: audit: type=1300 audit(1769581186.870:847): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe07eebb30 a2=3 a3=0 items=0 ppid=1 pid=5525 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:46.990564 kernel: audit: type=1327 audit(1769581186.870:847): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:46.870000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:47.003561 kernel: audit: type=1105 audit(1769581186.965:848): pid=5525 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:46.965000 audit[5525]: USER_START pid=5525 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:47.045503 kernel: audit: type=1103 audit(1769581186.970:849): pid=5529 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:46.970000 audit[5529]: CRED_ACQ pid=5529 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:47.232791 sshd[5529]: Connection closed by 10.0.0.1 port 38472 Jan 28 06:19:47.233091 sshd-session[5525]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:47.238000 audit[5525]: USER_END pid=5525 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:47.246828 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit. Jan 28 06:19:47.246991 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:38472.service: Deactivated successfully. Jan 28 06:19:47.252886 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 06:19:47.257175 systemd-logind[1580]: Removed session 20. 
Jan 28 06:19:47.285171 kubelet[2882]: E0128 06:19:47.284694 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:19:47.286558 kernel: audit: type=1106 audit(1769581187.238:850): pid=5525 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:47.286727 kernel: audit: type=1104 audit(1769581187.238:851): pid=5525 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:47.238000 audit[5525]: CRED_DISP pid=5525 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:47.286832 containerd[1608]: time="2026-01-28T06:19:47.285009714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 06:19:47.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.25:22-10.0.0.1:38472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:47.399947 containerd[1608]: time="2026-01-28T06:19:47.399656624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:19:47.409554 containerd[1608]: time="2026-01-28T06:19:47.408869612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 06:19:47.409554 containerd[1608]: time="2026-01-28T06:19:47.408966712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 06:19:47.409703 kubelet[2882]: E0128 06:19:47.409109 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:19:47.409703 kubelet[2882]: E0128 06:19:47.409173 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 06:19:47.411748 kubelet[2882]: E0128 06:19:47.411514 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2b8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7765448cd-4smp8_calico-system(17337418-3675-4e8a-a365-9d0165d3a261): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 06:19:47.414715 kubelet[2882]: E0128 06:19:47.414580 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:19:50.283866 kubelet[2882]: E0128 06:19:50.282158 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:19:50.283866 kubelet[2882]: E0128 06:19:50.283173 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:19:52.258836 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:38480.service - OpenSSH per-connection server daemon (10.0.0.1:38480). Jan 28 06:19:52.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.25:22-10.0.0.1:38480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:52.267931 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:52.268052 kernel: audit: type=1130 audit(1769581192.258:853): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.25:22-10.0.0.1:38480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:52.277060 kubelet[2882]: E0128 06:19:52.276180 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:19:52.496000 audit[5549]: USER_ACCT pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.497910 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 38480 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:52.504097 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:52.535707 kernel: audit: type=1101 audit(1769581192.496:854): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.501000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.562935 systemd-logind[1580]: New session 21 of user core. Jan 28 06:19:52.577663 kernel: audit: type=1103 audit(1769581192.501:855): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.501000 audit[5549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7ee606d0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:52.645534 kernel: audit: type=1006 audit(1769581192.501:856): pid=5549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 28 06:19:52.645670 kernel: audit: type=1300 audit(1769581192.501:856): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7ee606d0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:52.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:52.658974 kernel: audit: type=1327 audit(1769581192.501:856): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:52.660964 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 28 06:19:52.671000 audit[5549]: USER_START pid=5549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.722571 kernel: audit: type=1105 audit(1769581192.671:857): pid=5549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.676000 audit[5577]: CRED_ACQ pid=5577 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.766686 kernel: audit: type=1103 audit(1769581192.676:858): pid=5577 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.938061 sshd[5577]: Connection closed by 10.0.0.1 port 38480 Jan 28 06:19:52.940187 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:52.944000 audit[5549]: USER_END pid=5549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.952766 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:38480.service: Deactivated successfully. 
Jan 28 06:19:52.957654 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 06:19:52.959669 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit. Jan 28 06:19:52.962596 systemd-logind[1580]: Removed session 21. Jan 28 06:19:52.945000 audit[5549]: CRED_DISP pid=5549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.992597 kernel: audit: type=1106 audit(1769581192.944:859): pid=5549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.992670 kernel: audit: type=1104 audit(1769581192.945:860): pid=5549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:52.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.25:22-10.0.0.1:38480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:57.273922 kubelet[2882]: E0128 06:19:57.273790 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:19:57.281945 kubelet[2882]: E0128 06:19:57.281741 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:19:57.963565 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:54596.service - OpenSSH per-connection server daemon (10.0.0.1:54596). Jan 28 06:19:57.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.25:22-10.0.0.1:54596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:19:57.975408 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:19:57.975472 kernel: audit: type=1130 audit(1769581197.962:862): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.25:22-10.0.0.1:54596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:58.139000 audit[5599]: USER_ACCT pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.140857 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 54596 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:19:58.144920 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:19:58.156166 systemd-logind[1580]: New session 22 of user core. Jan 28 06:19:58.142000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.212570 kernel: audit: type=1101 audit(1769581198.139:863): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.212636 kernel: audit: type=1103 audit(1769581198.142:864): pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.212660 kernel: audit: type=1006 audit(1769581198.143:865): pid=5599 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 28 06:19:58.143000 audit[5599]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbba1bf30 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:58.276510 kernel: audit: type=1300 audit(1769581198.143:865): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbba1bf30 a2=3 a3=0 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:19:58.276606 kernel: audit: type=1327 audit(1769581198.143:865): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:58.143000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:19:58.276665 kubelet[2882]: E0128 06:19:58.276060 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:19:58.295168 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 06:19:58.304000 audit[5599]: USER_START pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.351656 kernel: audit: type=1105 audit(1769581198.304:866): pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.351776 kernel: audit: type=1103 audit(1769581198.309:867): pid=5604 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.309000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.566035 sshd[5604]: Connection closed by 10.0.0.1 port 54596 Jan 28 06:19:58.566600 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Jan 28 06:19:58.569000 audit[5599]: USER_END pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.576554 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:54596.service: Deactivated successfully. 
Jan 28 06:19:58.580993 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 06:19:58.584986 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit. Jan 28 06:19:58.587859 systemd-logind[1580]: Removed session 22. Jan 28 06:19:58.615689 kernel: audit: type=1106 audit(1769581198.569:868): pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.615765 kernel: audit: type=1104 audit(1769581198.569:869): pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.569000 audit[5599]: CRED_DISP pid=5599 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:19:58.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.25:22-10.0.0.1:54596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:19:59.278077 kubelet[2882]: E0128 06:19:59.277779 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:20:03.277175 kubelet[2882]: E0128 06:20:03.277025 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:20:03.281866 containerd[1608]: time="2026-01-28T06:20:03.281683005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 06:20:03.393015 containerd[1608]: time="2026-01-28T06:20:03.392862158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:20:03.396053 containerd[1608]: time="2026-01-28T06:20:03.395826086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 06:20:03.396053 containerd[1608]: time="2026-01-28T06:20:03.396014916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 06:20:03.397120 kubelet[2882]: E0128 06:20:03.396537 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:20:03.397120 kubelet[2882]: E0128 06:20:03.396602 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 06:20:03.397120 kubelet[2882]: E0128 06:20:03.396905 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:n
il,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 06:20:03.404075 containerd[1608]: time="2026-01-28T06:20:03.403726323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 06:20:03.489714 containerd[1608]: time="2026-01-28T06:20:03.489070409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:20:03.492686 containerd[1608]: time="2026-01-28T06:20:03.492647172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 06:20:03.492831 containerd[1608]: time="2026-01-28T06:20:03.492811808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 06:20:03.494125 kubelet[2882]: E0128 06:20:03.492966 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:20:03.494125 kubelet[2882]: E0128 06:20:03.493703 2882 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 06:20:03.494125 kubelet[2882]: E0128 06:20:03.493822 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h28vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwxrt_calico-system(8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 06:20:03.495802 kubelet[2882]: E0128 06:20:03.495619 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:20:03.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.25:22-10.0.0.1:54598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:20:03.601908 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:20:03.602075 kernel: audit: type=1130 audit(1769581203.594:871): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.25:22-10.0.0.1:54598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:03.593815 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:54598.service - OpenSSH per-connection server daemon (10.0.0.1:54598). Jan 28 06:20:03.749000 audit[5624]: USER_ACCT pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:03.751616 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 54598 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:03.755064 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:03.766830 systemd-logind[1580]: New session 23 of user core. 
Jan 28 06:20:03.785749 kernel: audit: type=1101 audit(1769581203.749:872): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:03.785844 kernel: audit: type=1103 audit(1769581203.750:873): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:03.750000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:03.843546 kernel: audit: type=1006 audit(1769581203.750:874): pid=5624 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 28 06:20:03.843665 kernel: audit: type=1300 audit(1769581203.750:874): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3af1b100 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:03.750000 audit[5624]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3af1b100 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:03.750000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:03.898581 kernel: audit: type=1327 audit(1769581203.750:874): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:03.901770 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 06:20:03.907000 audit[5624]: USER_START pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:03.949724 kernel: audit: type=1105 audit(1769581203.907:875): pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:03.949849 kernel: audit: type=1103 audit(1769581203.912:876): pid=5628 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:03.912000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:04.174934 sshd[5628]: Connection closed by 10.0.0.1 port 54598 Jan 28 06:20:04.175741 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:04.180000 audit[5624]: USER_END pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 06:20:04.185728 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:54598.service: Deactivated successfully. Jan 28 06:20:04.190593 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 06:20:04.193846 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit. Jan 28 06:20:04.196838 systemd-logind[1580]: Removed session 23. Jan 28 06:20:04.180000 audit[5624]: CRED_DISP pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:04.254877 kernel: audit: type=1106 audit(1769581204.180:877): pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:04.254991 kernel: audit: type=1104 audit(1769581204.180:878): pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:04.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.25:22-10.0.0.1:54598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:20:05.282022 containerd[1608]: time="2026-01-28T06:20:05.280060350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 06:20:05.373513 containerd[1608]: time="2026-01-28T06:20:05.372692192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:20:05.376534 containerd[1608]: time="2026-01-28T06:20:05.375977540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 06:20:05.376534 containerd[1608]: time="2026-01-28T06:20:05.376156655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 06:20:05.376654 kubelet[2882]: E0128 06:20:05.376608 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:20:05.377056 kubelet[2882]: E0128 06:20:05.376662 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 06:20:05.377056 kubelet[2882]: E0128 06:20:05.376788 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b0ef5b7dffa04fdda9d5c4de471b2ce8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 06:20:05.383146 containerd[1608]: time="2026-01-28T06:20:05.383012096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 06:20:05.469803 containerd[1608]: 
time="2026-01-28T06:20:05.469748186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:20:05.473820 containerd[1608]: time="2026-01-28T06:20:05.473621482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 06:20:05.473820 containerd[1608]: time="2026-01-28T06:20:05.473716659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 06:20:05.474051 kubelet[2882]: E0128 06:20:05.473936 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:20:05.474051 kubelet[2882]: E0128 06:20:05.473989 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 06:20:05.474123 kubelet[2882]: E0128 06:20:05.474095 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69cbcd96f4-7mm4t_calico-system(c3984d8a-f7f9-40c1-99bb-6a83cd256ee4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 06:20:05.476622 kubelet[2882]: E0128 06:20:05.475634 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:20:07.277854 containerd[1608]: time="2026-01-28T06:20:07.277534087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 06:20:07.372911 containerd[1608]: time="2026-01-28T06:20:07.372652982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:20:07.375182 containerd[1608]: time="2026-01-28T06:20:07.375132180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 06:20:07.376082 containerd[1608]: time="2026-01-28T06:20:07.375631207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 06:20:07.376152 kubelet[2882]: E0128 06:20:07.375924 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:20:07.376152 kubelet[2882]: E0128 06:20:07.375976 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 06:20:07.376152 kubelet[2882]: E0128 06:20:07.376095 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7lvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hpjqb_calico-system(32744aca-35af-454a-aa76-f78a1f5cf3bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 06:20:07.377997 kubelet[2882]: E0128 06:20:07.377935 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 
28 06:20:09.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.25:22-10.0.0.1:52322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:09.201730 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:52322.service - OpenSSH per-connection server daemon (10.0.0.1:52322). Jan 28 06:20:09.210125 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:20:09.210174 kernel: audit: type=1130 audit(1769581209.200:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.25:22-10.0.0.1:52322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:09.301734 containerd[1608]: time="2026-01-28T06:20:09.298813858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:20:09.365000 audit[5656]: USER_ACCT pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.368591 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 52322 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:09.375153 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:09.397612 systemd-logind[1580]: New session 24 of user core. 
Jan 28 06:20:09.406643 kernel: audit: type=1101 audit(1769581209.365:881): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.406696 kernel: audit: type=1103 audit(1769581209.371:882): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.371000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.406796 containerd[1608]: time="2026-01-28T06:20:09.402791959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:20:09.443102 kernel: audit: type=1006 audit(1769581209.371:883): pid=5656 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 28 06:20:09.443525 kubelet[2882]: E0128 06:20:09.442986 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:20:09.443525 kubelet[2882]: E0128 06:20:09.443046 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 
06:20:09.443895 containerd[1608]: time="2026-01-28T06:20:09.440942859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:20:09.443895 containerd[1608]: time="2026-01-28T06:20:09.441022306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:20:09.444758 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 06:20:09.451563 kubelet[2882]: E0128 06:20:09.445640 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqvxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-cgc65_calico-apiserver(cfce9196-a7e4-4009-b111-fc598ada449a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:20:09.451563 kubelet[2882]: E0128 06:20:09.446862 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:20:09.371000 audit[5656]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5761a4d0 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 06:20:09.501006 kernel: audit: type=1300 audit(1769581209.371:883): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5761a4d0 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:09.371000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:09.455000 audit[5656]: USER_START pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.567578 kernel: audit: type=1327 audit(1769581209.371:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:09.567679 kernel: audit: type=1105 audit(1769581209.455:884): pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.567731 kernel: audit: type=1103 audit(1769581209.460:885): pid=5660 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.460000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.694884 sshd[5660]: Connection closed by 10.0.0.1 
port 52322 Jan 28 06:20:09.695926 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:09.701000 audit[5656]: USER_END pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.701000 audit[5656]: CRED_DISP pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.754678 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:52322.service: Deactivated successfully. Jan 28 06:20:09.762764 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 06:20:09.767850 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit. Jan 28 06:20:09.772710 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:52336.service - OpenSSH per-connection server daemon (10.0.0.1:52336). 
Jan 28 06:20:09.775562 kernel: audit: type=1106 audit(1769581209.701:886): pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.775605 kernel: audit: type=1104 audit(1769581209.701:887): pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.25:22-10.0.0.1:52322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:09.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.25:22-10.0.0.1:52336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:09.783548 systemd-logind[1580]: Removed session 24. 
Jan 28 06:20:09.891000 audit[5674]: USER_ACCT pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.891833 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 52336 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:09.893000 audit[5674]: CRED_ACQ pid=5674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.893000 audit[5674]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce69181b0 a2=3 a3=0 items=0 ppid=1 pid=5674 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:09.893000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:09.895867 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:09.909005 systemd-logind[1580]: New session 25 of user core. Jan 28 06:20:09.915777 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 28 06:20:09.924000 audit[5674]: USER_START pid=5674 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:09.928000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:10.766071 sshd[5679]: Connection closed by 10.0.0.1 port 52336 Jan 28 06:20:10.766865 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:10.771000 audit[5674]: USER_END pid=5674 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:10.772000 audit[5674]: CRED_DISP pid=5674 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:10.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.25:22-10.0.0.1:52344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:10.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.25:22-10.0.0.1:52336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:20:10.788577 systemd[1]: Started sshd@24-10.0.0.25:22-10.0.0.1:52344.service - OpenSSH per-connection server daemon (10.0.0.1:52344). Jan 28 06:20:10.789561 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:52336.service: Deactivated successfully. Jan 28 06:20:10.794748 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 06:20:10.808894 systemd-logind[1580]: Session 25 logged out. Waiting for processes to exit. Jan 28 06:20:10.811758 systemd-logind[1580]: Removed session 25. Jan 28 06:20:10.995000 audit[5689]: USER_ACCT pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:11.000119 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 52344 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:11.000000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:11.000000 audit[5689]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff51e4ad90 a2=3 a3=0 items=0 ppid=1 pid=5689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:11.000000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:11.002752 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:11.020936 systemd-logind[1580]: New session 26 of user core. Jan 28 06:20:11.029776 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 28 06:20:11.041000 audit[5689]: USER_START pid=5689 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:11.047000 audit[5703]: CRED_ACQ pid=5703 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:12.287781 kubelet[2882]: E0128 06:20:12.286044 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:20:12.290934 containerd[1608]: time="2026-01-28T06:20:12.286724282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 06:20:12.332951 sshd[5703]: Connection closed by 10.0.0.1 port 52344 Jan 28 06:20:12.333137 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:12.340000 audit[5689]: USER_END pid=5689 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:12.340000 audit[5689]: CRED_DISP pid=5689 
uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:12.329000 audit[5720]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:20:12.329000 audit[5720]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffeb9a7a490 a2=0 a3=7ffeb9a7a47c items=0 ppid=3039 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:12.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:20:12.360180 systemd[1]: sshd@24-10.0.0.25:22-10.0.0.1:52344.service: Deactivated successfully. Jan 28 06:20:12.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.25:22-10.0.0.1:52344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:20:12.351000 audit[5720]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:20:12.351000 audit[5720]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb9a7a490 a2=0 a3=0 items=0 ppid=3039 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:12.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:20:12.371581 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 06:20:12.373735 systemd[1]: session-26.scope: Consumed 1.027s CPU time, 38.1M memory peak. Jan 28 06:20:12.379931 systemd-logind[1580]: Session 26 logged out. Waiting for processes to exit. Jan 28 06:20:12.389981 systemd[1]: Started sshd@25-10.0.0.25:22-10.0.0.1:52358.service - OpenSSH per-connection server daemon (10.0.0.1:52358). Jan 28 06:20:12.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.25:22-10.0.0.1:52358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:12.396123 systemd-logind[1580]: Removed session 26. 
Jan 28 06:20:12.408620 containerd[1608]: time="2026-01-28T06:20:12.407141986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 06:20:12.425111 containerd[1608]: time="2026-01-28T06:20:12.423734739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 06:20:12.425111 containerd[1608]: time="2026-01-28T06:20:12.423817061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 06:20:12.428704 kubelet[2882]: E0128 06:20:12.427629 2882 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:20:12.428704 kubelet[2882]: E0128 06:20:12.427681 2882 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 06:20:12.428704 kubelet[2882]: E0128 06:20:12.427800 2882 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckxvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f7d9cfbc8-r2r59_calico-apiserver(c5caa60c-0db4-4584-8ceb-7bfff587bf9e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 06:20:12.429597 kubelet[2882]: E0128 06:20:12.429103 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:20:12.553000 audit[5729]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:20:12.553000 audit[5729]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe51b6bfa0 a2=0 a3=7ffe51b6bf8c items=0 ppid=3039 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:12.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:20:12.562000 audit[5729]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:20:12.562000 audit[5729]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe51b6bfa0 a2=0 a3=0 items=0 ppid=3039 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:12.562000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:20:12.590000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:12.591796 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 52358 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:12.592000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:12.592000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddf3fc8c0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:12.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:12.596688 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:12.615573 systemd-logind[1580]: New session 27 of user core. Jan 28 06:20:12.624899 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 28 06:20:12.635000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:12.640000 audit[5733]: CRED_ACQ pid=5733 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.271096 sshd[5733]: Connection closed by 10.0.0.1 port 52358 Jan 28 06:20:13.279768 kubelet[2882]: E0128 06:20:13.279525 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 06:20:13.282652 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:13.295971 systemd[1]: Started sshd@26-10.0.0.25:22-10.0.0.1:52368.service - OpenSSH per-connection server daemon (10.0.0.1:52368). Jan 28 06:20:13.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.25:22-10.0.0.1:52368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:20:13.301000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.301000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.316582 systemd[1]: sshd@25-10.0.0.25:22-10.0.0.1:52358.service: Deactivated successfully. Jan 28 06:20:13.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.25:22-10.0.0.1:52358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:13.327009 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 06:20:13.330895 systemd-logind[1580]: Session 27 logged out. Waiting for processes to exit. Jan 28 06:20:13.335837 systemd-logind[1580]: Removed session 27. 
Jan 28 06:20:13.437000 audit[5744]: USER_ACCT pid=5744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.438931 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 52368 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:13.441000 audit[5744]: CRED_ACQ pid=5744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.441000 audit[5744]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff552098c0 a2=3 a3=0 items=0 ppid=1 pid=5744 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:13.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:13.444724 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:13.465179 systemd-logind[1580]: New session 28 of user core. Jan 28 06:20:13.482169 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 28 06:20:13.492000 audit[5744]: USER_START pid=5744 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.498000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.743831 sshd[5751]: Connection closed by 10.0.0.1 port 52368 Jan 28 06:20:13.744594 sshd-session[5744]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:13.749000 audit[5744]: USER_END pid=5744 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.750000 audit[5744]: CRED_DISP pid=5744 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:13.756841 systemd[1]: sshd@26-10.0.0.25:22-10.0.0.1:52368.service: Deactivated successfully. Jan 28 06:20:13.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.25:22-10.0.0.1:52368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:13.762966 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 06:20:13.767063 systemd-logind[1580]: Session 28 logged out. Waiting for processes to exit. 
Jan 28 06:20:13.771041 systemd-logind[1580]: Removed session 28. Jan 28 06:20:15.277939 kubelet[2882]: E0128 06:20:15.277039 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:20:16.279182 kubelet[2882]: E0128 06:20:16.278926 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:20:18.277728 kubelet[2882]: 
E0128 06:20:18.277590 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf" Jan 28 06:20:18.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.25:22-10.0.0.1:37936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:18.767772 systemd[1]: Started sshd@27-10.0.0.25:22-10.0.0.1:37936.service - OpenSSH per-connection server daemon (10.0.0.1:37936). Jan 28 06:20:18.776850 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 28 06:20:18.776921 kernel: audit: type=1130 audit(1769581218.767:929): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.25:22-10.0.0.1:37936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:18.958000 audit[5766]: USER_ACCT pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:18.960791 sshd[5766]: Accepted publickey for core from 10.0.0.1 port 37936 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:18.964890 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:18.984028 systemd-logind[1580]: New session 29 of user core. 
Jan 28 06:20:18.961000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.047768 kernel: audit: type=1101 audit(1769581218.958:930): pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.047900 kernel: audit: type=1103 audit(1769581218.961:931): pid=5766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.070752 kernel: audit: type=1006 audit(1769581218.961:932): pid=5766 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 28 06:20:19.071681 kernel: audit: type=1300 audit(1769581218.961:932): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc855e5d00 a2=3 a3=0 items=0 ppid=1 pid=5766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:18.961000 audit[5766]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc855e5d00 a2=3 a3=0 items=0 ppid=1 pid=5766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:19.109795 kernel: audit: type=1327 audit(1769581218.961:932): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:18.961000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:19.130856 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 06:20:19.139000 audit[5766]: USER_START pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.183570 kernel: audit: type=1105 audit(1769581219.139:933): pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.183736 kernel: audit: type=1103 audit(1769581219.144:934): pid=5770 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.144000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.476882 sshd[5770]: Connection closed by 10.0.0.1 port 37936 Jan 28 06:20:19.477880 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:19.481000 audit[5766]: USER_END pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 28 06:20:19.487768 systemd[1]: sshd@27-10.0.0.25:22-10.0.0.1:37936.service: Deactivated successfully. Jan 28 06:20:19.492898 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 06:20:19.499963 systemd-logind[1580]: Session 29 logged out. Waiting for processes to exit. Jan 28 06:20:19.502864 systemd-logind[1580]: Removed session 29. Jan 28 06:20:19.482000 audit[5766]: CRED_DISP pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.565669 kernel: audit: type=1106 audit(1769581219.481:935): pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.565787 kernel: audit: type=1104 audit(1769581219.482:936): pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:19.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.25:22-10.0.0.1:37936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:20:23.302781 kubelet[2882]: E0128 06:20:23.300697 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e" Jan 28 06:20:24.279976 kubelet[2882]: E0128 06:20:24.279074 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a" Jan 28 06:20:24.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.25:22-10.0.0.1:59692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:24.505711 systemd[1]: Started sshd@28-10.0.0.25:22-10.0.0.1:59692.service - OpenSSH per-connection server daemon (10.0.0.1:59692). Jan 28 06:20:24.521632 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 06:20:24.521962 kernel: audit: type=1130 audit(1769581224.505:938): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.25:22-10.0.0.1:59692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 06:20:24.693783 sshd[5811]: Accepted publickey for core from 10.0.0.1 port 59692 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:24.693000 audit[5811]: USER_ACCT pid=5811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:24.728943 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:24.725000 audit[5811]: CRED_ACQ pid=5811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:24.742703 systemd-logind[1580]: New session 30 of user core. Jan 28 06:20:24.764649 kernel: audit: type=1101 audit(1769581224.693:939): pid=5811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:24.764734 kernel: audit: type=1103 audit(1769581224.725:940): pid=5811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:24.764777 kernel: audit: type=1006 audit(1769581224.725:941): pid=5811 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 28 06:20:24.783840 kernel: audit: type=1300 audit(1769581224.725:941): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc549b4dd0 a2=3 a3=0 items=0 ppid=1 pid=5811 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:24.725000 audit[5811]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc549b4dd0 a2=3 a3=0 items=0 ppid=1 pid=5811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:24.821643 kernel: audit: type=1327 audit(1769581224.725:941): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:24.725000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 06:20:24.844847 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 06:20:24.857000 audit[5811]: USER_START pid=5811 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:24.863000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:24.948715 kernel: audit: type=1105 audit(1769581224.857:942): pid=5811 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:24.948835 kernel: audit: type=1103 audit(1769581224.863:943): pid=5815 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:25.092000 audit[5825]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:20:25.092000 audit[5825]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb60958e0 a2=0 a3=7fffb60958cc items=0 ppid=3039 pid=5825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:25.175614 kernel: audit: type=1325 audit(1769581225.092:944): table=filter:144 family=2 entries=26 op=nft_register_rule pid=5825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:20:25.175738 kernel: audit: type=1300 audit(1769581225.092:944): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb60958e0 a2=0 a3=7fffb60958cc items=0 ppid=3039 pid=5825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:25.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:20:25.187000 audit[5825]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 06:20:25.187000 audit[5825]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffb60958e0 a2=0 a3=7fffb60958cc items=0 ppid=3039 pid=5825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 06:20:25.187000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 06:20:25.228021 sshd[5815]: Connection closed by 10.0.0.1 port 59692 Jan 28 06:20:25.242722 sshd-session[5811]: pam_unix(sshd:session): session closed for user core Jan 28 06:20:25.251000 audit[5811]: USER_END pid=5811 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:25.251000 audit[5811]: CRED_DISP pid=5811 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:25.265914 systemd[1]: sshd@28-10.0.0.25:22-10.0.0.1:59692.service: Deactivated successfully. Jan 28 06:20:25.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.25:22-10.0.0.1:59692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:25.273781 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 06:20:25.278895 systemd-logind[1580]: Session 30 logged out. Waiting for processes to exit. Jan 28 06:20:25.284676 systemd-logind[1580]: Removed session 30. 
Jan 28 06:20:27.277695 kubelet[2882]: E0128 06:20:27.277627 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261" Jan 28 06:20:28.283157 kubelet[2882]: E0128 06:20:28.283091 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4" Jan 28 06:20:30.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.25:22-10.0.0.1:59702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:30.250143 systemd[1]: Started sshd@29-10.0.0.25:22-10.0.0.1:59702.service - OpenSSH per-connection server daemon (10.0.0.1:59702). 
Jan 28 06:20:30.260714 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 06:20:30.260801 kernel: audit: type=1130 audit(1769581230.250:949): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.25:22-10.0.0.1:59702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 06:20:30.309756 kubelet[2882]: E0128 06:20:30.309694 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwxrt" podUID="8aa84ac8-30ec-47f2-bd77-dd7e1d57eb66" Jan 28 06:20:30.577065 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 59702 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA Jan 28 06:20:30.576000 audit[5830]: USER_ACCT pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 06:20:30.588907 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 06:20:30.598124 systemd-logind[1580]: New session 31 of user core. 
Jan 28 06:20:30.586000 audit[5830]: CRED_ACQ pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:30.653783 kernel: audit: type=1101 audit(1769581230.576:950): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:30.653938 kernel: audit: type=1103 audit(1769581230.586:951): pid=5830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:30.653987 kernel: audit: type=1006 audit(1769581230.586:952): pid=5830 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1
Jan 28 06:20:30.586000 audit[5830]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4ed938a0 a2=3 a3=0 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 06:20:30.717892 kernel: audit: type=1300 audit(1769581230.586:952): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4ed938a0 a2=3 a3=0 items=0 ppid=1 pid=5830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 06:20:30.718918 kernel: audit: type=1327 audit(1769581230.586:952): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 06:20:30.586000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 06:20:30.720738 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 28 06:20:30.730000 audit[5830]: USER_START pid=5830 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:30.789608 kernel: audit: type=1105 audit(1769581230.730:953): pid=5830 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:30.735000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:30.837721 kernel: audit: type=1103 audit(1769581230.735:954): pid=5834 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:31.074610 sshd[5834]: Connection closed by 10.0.0.1 port 59702
Jan 28 06:20:31.075812 sshd-session[5830]: pam_unix(sshd:session): session closed for user core
Jan 28 06:20:31.081000 audit[5830]: USER_END pid=5830 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:31.086747 systemd-logind[1580]: Session 31 logged out. Waiting for processes to exit.
Jan 28 06:20:31.089062 systemd[1]: sshd@29-10.0.0.25:22-10.0.0.1:59702.service: Deactivated successfully.
Jan 28 06:20:31.095988 systemd[1]: session-31.scope: Deactivated successfully.
Jan 28 06:20:31.105581 systemd-logind[1580]: Removed session 31.
Jan 28 06:20:31.081000 audit[5830]: CRED_DISP pid=5830 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:31.167653 kernel: audit: type=1106 audit(1769581231.081:955): pid=5830 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:31.167745 kernel: audit: type=1104 audit(1769581231.081:956): pid=5830 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:31.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.25:22-10.0.0.1:59702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:20:33.294643 kubelet[2882]: E0128 06:20:33.294576 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hpjqb" podUID="32744aca-35af-454a-aa76-f78a1f5cf3bf"
Jan 28 06:20:35.287703 kubelet[2882]: E0128 06:20:35.287169 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 06:20:36.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.25:22-10.0.0.1:51964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:20:36.100070 systemd[1]: Started sshd@30-10.0.0.25:22-10.0.0.1:51964.service - OpenSSH per-connection server daemon (10.0.0.1:51964).
Jan 28 06:20:36.111735 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 06:20:36.112154 kernel: audit: type=1130 audit(1769581236.100:958): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.25:22-10.0.0.1:51964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:20:36.306000 audit[5848]: USER_ACCT pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.338074 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 51964 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA
Jan 28 06:20:36.342002 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 06:20:36.363822 systemd-logind[1580]: New session 32 of user core.
Jan 28 06:20:36.339000 audit[5848]: CRED_ACQ pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.414957 kernel: audit: type=1101 audit(1769581236.306:959): pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.415089 kernel: audit: type=1103 audit(1769581236.339:960): pid=5848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.339000 audit[5848]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff85eac0 a2=3 a3=0 items=0 ppid=1 pid=5848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 06:20:36.457925 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 28 06:20:36.502689 kernel: audit: type=1006 audit(1769581236.339:961): pid=5848 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1
Jan 28 06:20:36.502804 kernel: audit: type=1300 audit(1769581236.339:961): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff85eac0 a2=3 a3=0 items=0 ppid=1 pid=5848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 06:20:36.537764 kernel: audit: type=1327 audit(1769581236.339:961): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 06:20:36.339000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 06:20:36.477000 audit[5848]: USER_START pid=5848 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.486000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.642831 kernel: audit: type=1105 audit(1769581236.477:962): pid=5848 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.642959 kernel: audit: type=1103 audit(1769581236.486:963): pid=5852 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.955679 sshd[5852]: Connection closed by 10.0.0.1 port 51964
Jan 28 06:20:36.954934 sshd-session[5848]: pam_unix(sshd:session): session closed for user core
Jan 28 06:20:36.959000 audit[5848]: USER_END pid=5848 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.967794 systemd-logind[1580]: Session 32 logged out. Waiting for processes to exit.
Jan 28 06:20:36.969590 systemd[1]: sshd@30-10.0.0.25:22-10.0.0.1:51964.service: Deactivated successfully.
Jan 28 06:20:36.978153 systemd[1]: session-32.scope: Deactivated successfully.
Jan 28 06:20:36.981894 systemd-logind[1580]: Removed session 32.
Jan 28 06:20:36.959000 audit[5848]: CRED_DISP pid=5848 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:37.068039 kernel: audit: type=1106 audit(1769581236.959:964): pid=5848 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:37.068171 kernel: audit: type=1104 audit(1769581236.959:965): pid=5848 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:36.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.25:22-10.0.0.1:51964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:20:38.275059 kubelet[2882]: E0128 06:20:38.274884 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-r2r59" podUID="c5caa60c-0db4-4584-8ceb-7bfff587bf9e"
Jan 28 06:20:39.276035 kubelet[2882]: E0128 06:20:39.275847 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d9cfbc8-cgc65" podUID="cfce9196-a7e4-4009-b111-fc598ada449a"
Jan 28 06:20:40.276726 kubelet[2882]: E0128 06:20:40.276669 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7765448cd-4smp8" podUID="17337418-3675-4e8a-a365-9d0165d3a261"
Jan 28 06:20:41.274708 kubelet[2882]: E0128 06:20:41.274567 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 06:20:41.277063 kubelet[2882]: E0128 06:20:41.276989 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69cbcd96f4-7mm4t" podUID="c3984d8a-f7f9-40c1-99bb-6a83cd256ee4"
Jan 28 06:20:41.881893 systemd[1732]: Created slice background.slice - User Background Tasks Slice.
Jan 28 06:20:41.884937 systemd[1732]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Jan 28 06:20:41.928965 systemd[1732]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Jan 28 06:20:41.970637 systemd[1]: Started sshd@31-10.0.0.25:22-10.0.0.1:51980.service - OpenSSH per-connection server daemon (10.0.0.1:51980).
Jan 28 06:20:41.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.25:22-10.0.0.1:51980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:20:41.975511 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 06:20:41.975551 kernel: audit: type=1130 audit(1769581241.971:967): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.25:22-10.0.0.1:51980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 06:20:42.073000 audit[5867]: USER_ACCT pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.075481 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 51980 ssh2: RSA SHA256:1Ikwrb4M6/Z2+A/4X0bUvez0dSPDN1Z4ZrPUhKJ/SsA
Jan 28 06:20:42.077571 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 06:20:42.091916 systemd-logind[1580]: New session 33 of user core.
Jan 28 06:20:42.093946 kernel: audit: type=1101 audit(1769581242.073:968): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.093983 kernel: audit: type=1103 audit(1769581242.075:969): pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.075000 audit[5867]: CRED_ACQ pid=5867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.118923 kernel: audit: type=1006 audit(1769581242.075:970): pid=5867 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1
Jan 28 06:20:42.119010 kernel: audit: type=1300 audit(1769581242.075:970): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd19f930 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 06:20:42.075000 audit[5867]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd19f930 a2=3 a3=0 items=0 ppid=1 pid=5867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 06:20:42.119671 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 28 06:20:42.075000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 06:20:42.144874 kernel: audit: type=1327 audit(1769581242.075:970): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 06:20:42.144976 kernel: audit: type=1105 audit(1769581242.136:971): pid=5867 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.136000 audit[5867]: USER_START pid=5867 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.139000 audit[5871]: CRED_ACQ pid=5871 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.182299 kernel: audit: type=1103 audit(1769581242.139:972): pid=5871 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.274475 kubelet[2882]: E0128 06:20:42.274068 2882 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 06:20:42.327448 sshd[5871]: Connection closed by 10.0.0.1 port 51980
Jan 28 06:20:42.327777 sshd-session[5867]: pam_unix(sshd:session): session closed for user core
Jan 28 06:20:42.328000 audit[5867]: USER_END pid=5867 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.334695 systemd[1]: sshd@31-10.0.0.25:22-10.0.0.1:51980.service: Deactivated successfully.
Jan 28 06:20:42.337107 systemd-logind[1580]: Session 33 logged out. Waiting for processes to exit.
Jan 28 06:20:42.340652 systemd[1]: session-33.scope: Deactivated successfully.
Jan 28 06:20:42.346126 systemd-logind[1580]: Removed session 33.
Jan 28 06:20:42.329000 audit[5867]: CRED_DISP pid=5867 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.370731 kernel: audit: type=1106 audit(1769581242.328:973): pid=5867 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.370831 kernel: audit: type=1104 audit(1769581242.329:974): pid=5867 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 28 06:20:42.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.25:22-10.0.0.1:51980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'